
Theses on the topic « Timing detectors »

Consult the 50 best theses for your research on the topic « Timing detectors ».


1

Carulla Areste, Maria del Mar. « Thin LGAD timing detectors for the ATLAS experiment ». Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/667283.

Full text
Abstract:
The Large Hadron Collider (LHC), with its 27-kilometre circumference, is the world's largest and most powerful particle accelerator. The LHC was designed to collide protons at a centre-of-mass energy of 14 TeV. The design luminosity is 10^34 cm^-2 s^-1, which is achieved with 2808 circulating bunches, each with ~10^11 protons. Bunches are spaced by 25 ns, corresponding to a collision rate of 40 MHz at each of the four interaction points. The main priority of the European Strategy for Particle Physics is the exploitation of the full potential of the LHC. An upgrade of the LHC to the high-luminosity LHC (HL-LHC) was planned for this purpose. The HL-LHC will require an upgrade of the machine and detectors with a view to collecting ten times more data than in the initial design, by around 2030. The major challenges for the high-luminosity phase are the occupancy, pile-up, high data rates, and radiation tolerance of the detectors. The increase in occupancy will be mitigated using higher granularity. Fast timing detectors with time resolution in the range of 30 ps will be used to reduce pile-up. Furthermore, precision timing will provide additional physics capabilities. The purpose of the present thesis is the design, development and study of silicon detectors with high granularity and 30 ps time resolution suitable for the upgrade of the ATLAS (A Toroidal LHC Apparatus) experiment in the HL-LHC phase. Low Gain Avalanche Detectors (LGADs) have been proposed by the RD50 collaboration as timing detectors for the Endcap Timing Layer (ETL) of the ATLAS experiment. Three different strategies have been studied in order to fulfil the high-granularity, time-resolution and radiation-hardness specifications of devices for the ETL. The first strategy consisted in reducing the detector thickness to decrease the collection time, the rise time and the intrinsic Landau-noise contribution. The second strategy was the minimization of the capacitance by developing strips and pixels with gain. Finally, the last strategy relied on the use of other dopants to reduce radiation effects such as boron removal. The structure of the thesis is as follows: chapter 2 introduces the major issues in the LHC upgrade, the CERN experiments, the required specifications of particle detectors for the HL-LHC phase, their working principles, the measurement of time resolution, the microscopic and macroscopic radiation effects, and the state of the art in timing detectors; chapter 3 presents the technological and electrical simulation of the designed devices after the calibration of the technological simulation with the process characterization; chapter 4 gives an outline of the different device processes; chapter 5 presents the results obtained for unirradiated and irradiated devices; chapter 6 condenses the simulation, production and results of inverse Low Gain Avalanche Detectors (i-LGADs), and chapter 7 reports the conclusions and future work on the measured devices.
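For context when comparing the timing theses in this list: the strategies this abstract names (thinner sensors, lower capacitance) map onto the standard time-resolution budget used throughout the LGAD literature, sketched here in conventional notation rather than the thesis's own:

```latex
\sigma_t^2 = \sigma_{\mathrm{jitter}}^2 + \sigma_{\mathrm{Landau}}^2 + \sigma_{\mathrm{TDC}}^2,
\qquad
\sigma_{\mathrm{jitter}} = \frac{\sigma_{\mathrm{noise}}}{\mathrm{d}V/\mathrm{d}t} \approx \frac{t_{\mathrm{rise}}}{S/N}
```

Thinner sensors shorten the rise time and shrink the Landau term; lower capacitance lowers the noise term.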
2

Najafi, Faraz. « Timing performance of superconducting nanowire single-photon detectors ». Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97816.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-89).
Superconducting nanowire single-photon detectors (SNSPDs) are becoming increasingly popular for applications in quantum information and long-distance communication. While the detection efficiency of SNSPDs has significantly improved over time, their timing performance has largely remained unchanged. Furthermore, the photodetection process in superconducting nanowires is still not fully understood and is subject to ongoing research. In this thesis, I will present a systematic study of the timing performance of different types of nanowire single-photon detectors. I will analyze the photodetection delay histogram (also called the instrument response function, IRF) of these detectors as a function of bias current, nanowire width and wavelength. The study of the IRF yielded several unexpected results, among them a wavelength-dependent exponential tail of the IRF and a discrepancy between experimental photodetection delay results and the value predicted by the electrothermal model. These results reveal some shortcomings of the basic models used for SNSPDs, and may include a signature of the initial process by which photons are detected in superconducting nanowires. I will conclude this thesis by presenting a brief introduction to vortices, which have recently become a popular starting point for photodetection models for SNSPDs. Building on prior work, I will show that a simple image method can be used to calculate the current flow in the presence of a vortex, and discuss possible implications of recent vortex-based models for timing jitter.
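The IRF analysis described in this abstract is commonly modelled as a Gaussian jitter core convolved with an exponential tail (an exponentially modified Gaussian). A minimal fitting sketch, assuming SciPy and synthetic delays in place of the thesis's measurements:

```python
import numpy as np
from scipy.stats import exponnorm
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic photodetection delays (ps): Gaussian jitter core plus exponential
# tail, standing in for a measured IRF histogram.
mu, sigma, tau = 100.0, 18.0, 35.0
delays = mu + rng.normal(0, sigma, 20000) + rng.exponential(tau, 20000)

counts, edges = np.histogram(delays, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def emg_pdf(t, mu, sigma, tau):
    # scipy's exponnorm is the exponentially modified Gaussian, shape K = tau/sigma
    return exponnorm.pdf(t, K=tau / sigma, loc=mu, scale=sigma)

popt, _ = curve_fit(emg_pdf, centers, counts, p0=[90.0, 10.0, 20.0],
                    bounds=([0.0, 1.0, 1.0], [300.0, 100.0, 200.0]))
print("mu = %.1f ps, sigma = %.1f ps, tau = %.1f ps" % tuple(popt))
```

A wavelength-dependent tail, as reported above, would show up as a wavelength-dependent fitted tau.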
3

Sjöström, Fredrik. « Auto-triggering studies of Low Gain Avalanche Detectors for the ATLAS High-Granularity Timing Detector ». Thesis, KTH, Fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-253905.

Full text
4

Lacasa, Calvo Luis. « Investigation of Variety of Non-Coherent Front end Detectors For Timing Estimation ». Thesis, KTH, Signalbehandling, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138045.

Full text
Abstract:
The indoor localization of mobile users is currently a central issue for many applications and fields, including sensor networks, asset management, healthcare, ambient-assisted living, and public safety personnel localization. Existing solutions often rely on the fusion of information from multiple sensors. The potential of using an ultra wideband (UWB) system for wireless distance measurement based on the round-trip time (RTT) has been investigated in this thesis. Non-coherent UWB receivers have been analyzed using two different approaches: amplitude detection and energy detection. Both non-coherent UWB receiver front ends have been designed and implemented. Simulations of the measurement performance are also provided. Furthermore, a method has been proposed that uses undersampling over a burst of UWB pulses to reconstruct the original pulse and approximate the optimal performance of the ideal UWB receiver. The simulations yield interesting results regarding the performance of the RTT estimation. The two detection techniques are compared, and the advantages and disadvantages of each are described.
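To illustrate the energy-detection branch compared in this abstract, a toy time-of-arrival estimator (pulse, noise and window parameters are hypothetical, not the thesis's receiver design): square the received samples, integrate over short windows, and take the first window whose energy crosses a noise-calibrated threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10e9                       # sampling rate (Hz), hypothetical
t = np.arange(2000) / fs
true_toa = 80e-9                # true pulse arrival time (s)

# Received signal: a short UWB pulse buried in noise.
pulse = np.exp(-(((t - true_toa) / 0.5e-9) ** 2))
r = 0.6 * pulse + 0.1 * rng.normal(size=t.size)

# Energy detector: integrate r^2 over non-overlapping windows.
win = 8                                        # samples per integration window
energy = (r ** 2).reshape(-1, win).sum(axis=1)

# Threshold calibrated on noise-only windows at the start of the record.
thr = energy[:10].mean() + 5 * energy[:10].std()
first = int(np.argmax(energy > thr))           # index of first crossing
toa_hat = first * win / fs
print(f"estimated TOA: {toa_hat * 1e9:.1f} ns (true {true_toa * 1e9:.1f} ns)")
```

An amplitude detector would threshold |r| directly, trading noise averaging for finer time granularity.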
5

Strazzi, Sofia. « Study of first thin LGAD prototypes for the ALICE 3 timing layers ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24382/.

Full text
Abstract:
The work presented here concerns the characterization and performance study of very thin Low-Gain Avalanche Detector (LGAD) prototypes; the goal is to evaluate whether such a sensor is suitable for the Time-Of-Flight (TOF) system of the ALICE 3 experiment, a next-generation heavy-ion experiment (LHC Run 5). A total of 18 sensors with thicknesses of 25 μm and 35 μm were characterized; both single-channel sensors and matrices, with different inter-pad designs and doping profiles, were compared to two 50 μm prototypes. Preliminary tests with a laser setup made it possible to evaluate the light-sensitive areas in terms of efficiency, uniformity of response and edge effects. Finally, the timing performance was analyzed. Promising results were found for the 25 μm-thick sensors, which showed a time resolution better than 16 ps at a gain of 20, reaching nearly 13 ps at a gain of 30.
6

Sidorova, Mariia. « Timing Jitter and Electron-Phonon Interaction in Superconducting Nanowire Single-Photon Detectors (SNSPDs) ». Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/22296.

Full text
Abstract:
This Ph.D. thesis is based on the experimental study of two mutually interconnected phenomena: intrinsic timing jitter in superconducting nanowire single-photon detectors (SNSPDs) and relaxation of the electron energy in superconducting films. Microscopically, the building element of any SNSPD device, a superconducting nanowire on top of a dielectric substrate, represents a complex object for both experimental and theoretical studies. The complexity arises because, in practice, the SNSPD utilizes strongly disordered and ultrathin superconducting films, which are acoustically mismatched with the underlying substrate and imply a non-equilibrium state. This thesis addresses the complexity of the most conventional superconducting material used in SNSPD technology, niobium nitride (NbN), by applying several distinct experimental techniques. As an emerging application of the SNSPD technology, we demonstrate a prototype of a dispersive Raman spectrometer with single-photon sensitivity.
7

Feroci, M., E. Bozzo, S. Brandt, M. Hernanz, M. van der Klis, L. P. Liu, P. Orleanski et al. « The LOFT mission concept: a status update ». SPIE-INT SOC OPTICAL ENGINEERING, 2016. http://hdl.handle.net/10150/622719.

Full text
Abstract:
The Large Observatory For x-ray Timing (LOFT) is a mission concept which was proposed to ESA as M3 and M4 candidate in the framework of the Cosmic Vision 2015-2025 program. Thanks to the unprecedented combination of effective area and spectral resolution of its main instrument and the uniquely large field of view of its wide field monitor, LOFT will be able to study the behaviour of matter in extreme conditions such as the strong gravitational field in the innermost regions close to black holes and neutron stars and the supra-nuclear densities in the interiors of neutron stars. The science payload is based on a Large Area Detector (LAD, > 8 m² effective area, 2-30 keV, 240 eV spectral resolution, 1 degree collimated field of view) and a Wide Field Monitor (WFM, 2-50 keV, 4 steradian field of view, 1 arcmin source location accuracy, 300 eV spectral resolution). The WFM is equipped with an on-board system for bright-event (e.g., GRB) localization. The trigger time and position of these events are broadcast to the ground within 30 s from discovery. In this paper we present the current technical and programmatic status of the mission.
8

Johnson, Jeremy Ryan. « Fault propagation timing analysis to aid in the selection of sensors for health management systems ». Diss., Rolla, Mo. : University of Missouri--Rolla i.e. [Missouri University of Science and Technology], 2008. http://scholarsmine.mst.edu/thesis/pdf/Johnson_09007dcc804bcda7.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2008.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed May 19, 2008). Degree granted by Missouri University of Science and Technology, formerly known as University of Missouri--Rolla. Includes bibliographical references (p. 39-41).
9

Hancock, Jason. « Evaluation of the timing characteristics of various PET detectors using a time alignment probe ». Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18467.

Full text
Abstract:
Time alignment is performed on a conventional PET scanner in order to reduce the noise in the image from undesirable interactions, called randoms. In time-of-flight scanners this alignment is even more critical in order to place the position of an annihilation accurately. Traditionally, the alignment is an iterative process done by adjusting time offsets and recording the count rate until it is maximized. We have designed and built a positron detector that can be placed in the PET scanner. This enables each crystal in the scanner to be aligned to the same event (the positron detection), providing a constant reference for each crystal. This increases both the accuracy of the alignment and the speed with which it can be done.
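A schematic of the probe-referenced alignment described above, under assumed data structures (per-event pairs of crystal id and crystal-minus-probe time difference): each crystal's mean delay relative to the probe becomes its correction offset in a single pass, instead of iterative count-rate maximization.

```python
import numpy as np
from collections import defaultdict

def align_offsets(events):
    """events: iterable of (crystal_id, t_crystal - t_probe) in ps.
    Returns per-crystal offsets, zero-meaned so the global time origin is kept."""
    deltas = defaultdict(list)
    for cid, dt in events:
        deltas[cid].append(dt)
    offsets = {cid: float(np.mean(v)) for cid, v in deltas.items()}
    mean_all = float(np.mean(list(offsets.values())))
    return {cid: off - mean_all for cid, off in offsets.items()}

# Toy usage: one crystal 120 ps late, another 40 ps early, 30 ps event spread.
rng = np.random.default_rng(2)
events = [(0, 120 + rng.normal(0, 30)) for _ in range(500)] + \
         [(1, -40 + rng.normal(0, 30)) for _ in range(500)]
print(align_offsets(events))
```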
10

Sjölin, Martin. « On the Fundamental Limitations of Timing and Energy Resolution for Silicon Detectors in PET Applications ». Thesis, KTH, Fysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101790.

Full text
11

Sidorova, Mariia [Verfasser]. « Timing Jitter and Electron-Phonon Interaction in Superconducting Nanowire Single-Photon Detectors (SNSPDs) / Mariia Sidorova ». Berlin : Humboldt-Universität zu Berlin, 2021. http://d-nb.info/1226153380/34.

Full text
12

Thalhammer, Christof. « Improving the light yield and timing resolution of scintillator-based detectors for positron emission tomography ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17248.

Full text
Abstract:
Positron emission tomography (PET) is a powerful medical imaging methodology to study functional processes. The light yield and coincident resolving time (CRT) of scintillator-based PET detectors are constrained by optical processes. These include light trapping in high-refractive-index media and incomplete light collection by photosensors. This work proposes the use of micro- and nano-optical devices to overcome these limitations, with the ultimate goal of improving the signal-to-noise ratio and overall image quality of PET acquisitions. For this, a light concentrator (LC) to improve the light collection of silicon photomultipliers at the Geiger-cell level is studied. Further, two-dimensional photonic crystals (PhCs) are proposed to reduce light trapping in scintillators. The concepts are studied in detail using optical Monte Carlo simulations. To account for the diffractive properties of PhCs, a novel combined simulation approach is presented that integrates results of a Maxwell solver into a ray-tracing algorithm. Samples of LCs and PhCs are fabricated with various semiconductor technologies and evaluated using a goniometer setup. A comparison between measured and simulated angular characteristics reveals very good agreement. Simulation studies of implementing LCs and PhCs into a PET detector module predict significant improvements of the light yield and CRT. Also, combining both concepts indicates no adverse effects but rather a cumulative benefit for the detector performance. Concentrator experiments with individual scintillators confirm these simulation results. Recognizing the challenges of transferring PhCs to scintillators, a novel fabrication method called direct nano-imprinting is evaluated. The feasibility of this approach is demonstrated on glass wafers. The work concludes with a discussion of the benefits and drawbacks of LCs and PhCs and their implications for future PET systems.
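The "combined simulation approach" above folds Maxwell-solver output into a ray tracer; a minimal sketch of that coupling, with a hypothetical angle-dependent transmission table standing in for the solver results:

```python
import numpy as np

# Hypothetical PhC transmission vs incidence angle (deg), as would be exported
# from a Maxwell solver; values are illustrative only.
angles_deg = np.array([0.0, 10, 20, 30, 40, 50, 60, 70, 80])
transmission = np.array([0.55, 0.54, 0.52, 0.48, 0.40, 0.30, 0.18, 0.08, 0.02])

def crosses_interface(theta_deg, rng):
    """Monte Carlo decision at the scintillator exit face: transmit with the
    tabulated probability, otherwise reflect back into the crystal."""
    p = np.interp(theta_deg, angles_deg, transmission)
    return rng.random() < p

rng = np.random.default_rng(3)
thetas = rng.uniform(0.0, 80.0, 20000)  # crude sample of photon incidence angles
extracted = sum(crosses_interface(th, rng) for th in thetas)
print(f"light extraction fraction: {extracted / thetas.size:.3f}")
```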
13

Laso, Garcia Alejandro. « Timing Resistive Plate Chambers with Ceramic Electrodes ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-163270.

Full text
Abstract:
The focus of this thesis is the development of Resistive Plate Chambers (RPCs) with ceramic electrodes. The use of ceramic composites, Si3N4/SiC, opens the way for the application of RPCs in harsh radiation environments. Future experiments like the Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt will need new RPCs with high rate capabilities and high radiation tolerance. Ceramic composites are especially suited for this purpose due to their resistance to radiation and chemical contamination. The bulk resistivity of these ceramics is in the range 10^7-10^13 Ohm cm. The bulk resistivity of the electrodes is the main factor determining the rate capability of an RPC; therefore a dedicated measuring station and a measurement protocol have been set up for these measurements. The dependence of the bulk resistivity on the different steps of the manufacturing process has been studied. Other electrical parameters like the relaxation time, the relative permittivity and the loss tangent have also been investigated. Simulation codes for the investigation of RPC functionality were developed using the gas detector simulation framework GARFIELD++. The parameters of the two gas mixtures used in RPC operation have been extracted. Furthermore, theoretical predictions of time resolution and efficiency have been calculated and compared with experimental results. Two ceramic materials have been used to assemble RPCs: Si3N4/SiC, and Al2O3 with a thin (nm-thick) chromium layer deposited on it. Several prototypes have been assembled with active areas of 5x5 cm^2, 10x10 cm^2 and 20x20 cm^2. The number of gaps ranges from two to six. The gas gap widths were 250 μm and 300 μm. As separator material, mylar foils, fishing line and high-resistivity ceramics have been used. Different detector architectures have been built and their effect on RPC performance analysed. The RPCs developed at HZDR and ITEP (Moscow) were systematically tested in electron and proton beams and with cosmic radiation over the course of three years. The performance of the RPCs was extracted from the measured data. The main parameters like time resolution, efficiency, rate capability, cluster size, detector currents and avalanche charge were obtained and compared with other RPC systems worldwide. A comparison with phenomenological models was performed.
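The rate-capability argument rests on the electrode bulk resistivity, which is straightforward to extract from a DC I-V measurement; a sketch with made-up sample values (the thesis's measurement protocol is more involved):

```python
# Bulk resistivity from a steady-state I-V point: rho = R * A / d.
V = 500.0      # applied voltage (V), hypothetical
I = 2.0e-9     # steady-state current (A), hypothetical
A = 1.0e-4     # contact area (m^2): 1 cm^2
d = 1.0e-3     # sample thickness (m): 1 mm

R = V / I                         # Ohm's law
rho_ohm_cm = R * A / d * 100.0    # convert Ohm*m -> Ohm*cm
print(f"bulk resistivity: {rho_ohm_cm:.2e} Ohm*cm")  # 2.50e+12, inside the quoted range
```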
14

Vignola, Gianpiero. « Time resolution study of SiPMs as tracker elements for the ALICE 3 timing layer ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23512/.

Full text
Abstract:
Following the present ALICE, a next-generation experiment at the LHC is under discussion (for LHC Run 5): ALICE 3. The idea is to have an all-silicon detector with superior tracking, vertexing, and timing. For Particle IDentification via Time Of Flight (PID-TOF), a detector capable of 20 ps time resolution positioned at 1 m from the interaction point is required. To this end, a large R&D phase on different technologies has just begun. Preliminary studies will be reported. In particular, the first time-resolution study using SiPM detectors directly detecting Minimum Ionizing Particles will be discussed. The results will also be compared with laser measurements.
15

CAPELLI, SIMONE. « Search for lepton flavour violating τ^+ → µ^+ µ^− µ^+ decay at LHCb and study on MCP-PMT detector for future LHCb Upgrade ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/403459.

Full text
Abstract:
The physics analysis has been the primary focus of my research activity during the PhD. Within the CERN LHCb collaboration, I performed an analysis of data collected during LHC Run2 (2016, 2017 and 2018). The aim of this work is the search for the decay of the τ lepton into three muons (τ^+→µ^+µ^-µ^+), a decay that would violate the conservation of charged lepton flavour (cLFV). Lepton flavour is an accidental symmetry of the Standard Model, and without neutrino oscillations such a decay would be forbidden. In the minimally extended Standard Model the branching ratio B(τ^+→µ^+µ^-µ^+) is expected to be O(10^-55), well below current and foreseen experimental sensitivity. Theories of physics beyond the Standard Model predict an enhancement of the τ^+→µ^+µ^-µ^+ decay up to values within present experimental sensitivity, O(10^-10). This decay has not been observed to date; only upper limits have been established by B-factories (BaBar, Belle) and by hadron collider experiments (LHCb). Improving the upper limit tightens the constraints on exotic theories, while an observation of the decay would be a clear signal of New Physics. The analysis is performed separately for each year, and the data are divided into two subsamples depending on the number of muon candidates triggered by the LHCb muon system. Multivariate models are used to distinguish signal from background, enhancing the signal sensitivity, and to define corrections for data-simulation agreement. The Ds^+→φ(µ^+µ^-)π^+ channel is used as a reference channel to estimate the upper limit on the branching fraction. The expected upper limit is computed with the CLs method and results in 1.8 (2.2) x 10^-8 at 90% (95%) C.L. The τ^+→µ^+µ^-µ^+ decay is an example of a very rare decay, and analyses involving such decays will benefit from the increased statistics that will be collected in the current Run3 and the following Run4 data-taking period at the upgraded LHCb. The High-Luminosity phase of LHCb, starting with Run5 of the LHC, will provide a further boost to the amount of available data. The LHCb detector will need to undergo a second upgrade to cope with the tenfold increase in luminosity. Numerous studies and R&D projects are currently devoted to developing technologies for the future LHCb detectors. Part of my PhD project was devoted to a candidate photodetector for the upgraded Ring Imaging Cherenkov (RICH) detector. I characterized the timing performance of a multianode microchannel-plate photomultiplier (MCP-PMT) in the single-photon regime. For the second upgrade it has been proposed to improve particle-identification performance by exploiting precise timing information to cope with the increased pile-up. MCP-based devices show excellent time resolution, but their use is critical due to saturation at rates above ~100 kHz/mm^2, while the expected rate that the future devices will have to withstand is ~10 MHz/mm^2. The Auratek-Square MCP-PMT produced by Photek is a 53x53 mm device with 64x64 anodes grouped into 8x8 pixels. The dependence of the time resolution on the bias voltage and the photon rate was assessed. When operating as a single-photon counter at low photon rate and with a single pixel illuminated, it shows a transit time spread (jitter) of ~100 ps FWHM, saturating at rates above ~100 kHz/mm^2. Lowering the bias voltage between the photocathode and the MCP input or between the MCP slabs can reduce the worsening of the time resolution at high rate.
Charge sharing between neighbouring pixels can degrade the time resolution to ~170 ps FWHM when the entire pixel area is illuminated, and could become a major crosstalk source if not accounted for.
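For readers unfamiliar with the CLs prescription quoted above, a toy single-bin illustration (hypothetical counts, not the analysis's binned likelihood): CLs = CL_{s+b}/CL_b, and a signal yield is excluded at 90% C.L. when CLs falls below 0.10.

```python
import numpy as np
from scipy.stats import poisson

def cls(n_obs, b, s):
    """CLs for a single-bin counting experiment, using the observed count as
    test statistic: p-values are P(N <= n_obs) under s+b and under b alone."""
    return poisson.cdf(n_obs, b + s) / poisson.cdf(n_obs, b)

n_obs, b = 3, 2.8                  # hypothetical observed events and background
for s in np.arange(0.5, 10.0, 0.5):
    if cls(n_obs, b, s) < 0.10:    # 90% C.L. exclusion threshold
        print(f"signal yields s >= {s:.1f} excluded at 90% C.L.")
        break
```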
16

Sohl, Lukas. « Development of PICOSEC-Micromegas for fast timing in high rate environments ». Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP084.

Full text
Abstract:
Future particle physics experiments will face an increasing particle flux with rising beam luminosity. Detectors close to the interaction point will need to be robust against the high particle flux. Moreover, a time resolution of tens of picoseconds for Minimum Ionising Particles (MIPs) will be necessary to ensure a clear vertex separation of the reconstructed secondary particles and to reduce pile-up. This manuscript focuses on the PICOSEC-Micromegas, an innovative particle detector based on a Micromegas readout coupled to a Cherenkov radiator and a photocathode in front of the gaseous volume. In this way, each primary electron is produced at the surface of the photocathode, suppressing the time jitter of several nanoseconds that would otherwise arise from the different ionisation positions created along a particle's path through the drift region of a gaseous detector. The drift region length is reduced to the same order of magnitude as the amplification region (100-200 μm) to minimise direct gas ionisation, and it is additionally used as a pre-amplification stage. A mathematical model, based on GARFIELD++ simulations, is developed to describe the propagation of the pre-amplification avalanche, showing that the length and multiplication of the avalanche in the drift region are the dominant factors in the timing performance. The PICOSEC-Micromegas concept is studied with several prototypes, optimising the electric fields, the drift distance, and the gas mixture at the LIDYL (Laboratoire Interactions, Dynamiques et Lasers) UV laser facility. A single-photoelectron time resolution of ~44 ps is measured with the shortest tested drift region length of 119 μm and the highest stable field setting. Measurements performed in the secondary particle beam at CERN have yielded a time resolution of 24 ps for 150 GeV muons with a drift region length of 200 μm and a CsI photocathode providing 10 photoelectrons per MIP. In order to evolve from the detection concept to a versatile instrument, several prototypes are developed, focusing on specific properties needed for future applications: anode segmentation, spark quenching, photocathode efficiency and robustness at higher particle flux. A hexagonally segmented multipad prototype is tested in the beam, with a time resolution of ~36 ps in the central pad. Operation in high-rate environments is studied with different resistive-strip and floating-strip anode detectors in muon and pion beams. Time resolutions significantly below 100 ps and stable operation in the pion beam are achieved with all resistive prototypes. Robust photocathode materials, as an alternative to CsI, are investigated to reduce degradation from the ion backflow generated in the pre-amplification avalanche. The most promising materials are diamond-like carbon (DLC) and boron carbide (B4C). Considering all the results achieved, two application cases are projected for the PICOSEC-Micromegas detector. The first is its use in a calorimeter as a timing layer: many secondary particles are produced in an electromagnetic calorimeter after a few radiation lengths, and a time resolution down to ~5 ps is expected with the PICOSEC-Micromegas. The second is particle identification through time-of-flight (TOF) measurements, where the PICOSEC-Micromegas is expected to double the momentum range of current TOF detectors for 3σ π/Κ separation.
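As a back-of-envelope cross-check (not the thesis's model), the two headline numbers are roughly consistent with photoelectron statistics:

```latex
\sigma_{\mathrm{MIP}} \approx \frac{\sigma_{1\mathrm{pe}}}{\sqrt{N_{\mathrm{pe}}}}
= \frac{44\ \mathrm{ps}}{\sqrt{10}} \approx 14\ \mathrm{ps}
```

The measured 24 ps is larger than this naive scaling, plausibly because the single-photoelectron figure was taken at a shorter drift gap (119 μm) and a higher field than the 200 μm muon measurement.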
17

Thalhammer, Christof [Verfasser], Thoralf [Akademischer Betreuer] Niendorf, Oliver [Akademischer Betreuer] Benson and Uwe [Akademischer Betreuer] Pietrzyk. « Improving the light yield and timing resolution of scintillator-based detectors for positron emission tomography / Christof Thalhammer. Gutachter: Thoralf Niendorf ; Oliver Benson ; Uwe Pietrzyk ». Berlin: Mathematisch-Naturwissenschaftliche Fakultät, 2015. http://d-nb.info/107445930X/34.

Full text
18

Thalhammer, Christof [Verfasser], Thoralf [Akademischer Betreuer] Niendorf, Oliver [Akademischer Betreuer] Benson and Uwe [Akademischer Betreuer] Pietrzyk. « Improving the light yield and timing resolution of scintillator-based detectors for positron emission tomography / Christof Thalhammer. Gutachter: Thoralf Niendorf ; Oliver Benson ; Uwe Pietrzyk ». Berlin: Mathematisch-Naturwissenschaftliche Fakultät, 2015. http://nbn-resolving.de/urn:nbn:de:kobv:11-100231219.

Full text
19

Yang, Hengzhao. « Task scheduling in supercapacitor based environmentally powered wireless sensor nodes ». Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48962.

Full text
Abstract:
The objective of this dissertation is to develop task scheduling guidelines and algorithms for wireless sensor nodes that harvest energy from ambient environment and use supercapacitor based storage systems to buffer the harvested energy. This dissertation makes five contributions. First, a physics based equivalent circuit model for supercapacitors is developed. The variable leakage resistance (VLR) model takes into account three mechanisms of supercapacitors: voltage dependency of capacitance, charge redistribution, and self-discharge. Second, the effects of time and supercapacitor initial state on supercapacitor voltage change and energy loss during charge redistribution are investigated. Third, the task scheduling problem in supercapacitor based environmentally powered wireless sensor nodes is studied qualitatively. The impacts of supercapacitor state and energy harvesting on task scheduling are examined. Task scheduling rules are developed. Fourth, the task scheduling problem in supercapacitor based environmentally powered wireless sensor nodes is studied quantitatively. The modified earliest deadline first (MEDF) algorithm is developed to schedule nonpreemptable tasks without precedence constraints. Finally, the modified first in first out (MFIFO) algorithm is proposed to schedule nonpreemptable tasks with precedence constraints. The MEDF and MFIFO algorithms take into account energy constraints of tasks in addition to timing constraints. The MEDF and MFIFO algorithms improve the energy performance and maintain the timing performance of the earliest deadline first (EDF) and first in first out (FIFO) algorithms, respectively.
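A minimal sketch of the MEDF idea as stated above, energy feasibility layered on EDF ordering; the stored-energy budget and slotted time are simplifying assumptions, not the dissertation's supercapacitor model:

```python
def medf_schedule(tasks, energy, harvest_per_slot):
    """tasks: dicts with 'id', 'deadline' (slot), 'exec' (slots), 'energy' (J).
    EDF order, but a task is dispatched only when stored energy covers it;
    otherwise the node idles to harvest. Returns (schedule, missed)."""
    schedule, missed, t = [], [], 0
    for task in sorted(tasks, key=lambda x: x["deadline"]):
        while energy < task["energy"] and t + task["exec"] <= task["deadline"]:
            energy += harvest_per_slot      # idle one slot to harvest energy
            t += 1
        if energy >= task["energy"] and t + task["exec"] <= task["deadline"]:
            schedule.append((t, task["id"]))
            energy -= task["energy"]
            t += task["exec"]
        else:
            missed.append(task["id"])       # infeasible in time or energy
    return schedule, missed

tasks = [{"id": "sense", "deadline": 4, "exec": 1, "energy": 2.0},
         {"id": "tx",    "deadline": 6, "exec": 2, "energy": 5.0},
         {"id": "log",   "deadline": 9, "exec": 1, "energy": 1.0}]
print(medf_schedule(tasks, energy=3.0, harvest_per_slot=1.0))
```

The MFIFO variant described above would apply the same energy gate to FIFO order while respecting precedence constraints.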
20

Golnik, Christian. « Treatment verification in proton therapy based on the detection of prompt gamma-rays ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-227948.

Full text
Abstract:
Background: The finite range of a proton beam in tissue and the corresponding steep distal dose gradient near the end of the particle track open new vistas for the delivery of a highly target-conformal dose distribution in radiation therapy. Compared to a classical photon treatment, the potential therapeutic benefit of a particle treatment is a significant dose reduction in the tumor-surrounding tissue at a comparable dose level applied to the tumor.

Motivation: The actually applied particle range, and therefore the dose deposition in the target volume, is quite sensitive to the tissue composition in the path of the protons. Particle treatments are planned via computed tomography images acquired prior to the treatment. The conversion from photon stopping power to proton stopping power introduces an important source of range uncertainty. Furthermore, anatomical deviations from the planning situation affect the accurate dose deposition. Since there is no clinical routine measurement of the actually applied particle range, treatments are currently planned to be robust rather than optimal regarding the dose delivery. Robust planning incorporates the application of safety margins around the tumor volume as well as the usage of (potentially) unfavorable field directions. These pretreatment safety procedures aim to secure dose conformality in the tumor volume, but at the price of additional dose to the surrounding tissue. As a result, the unverified particle range constrains the principal benefit of proton therapy. An on-line, in-vivo range verification would therefore bring the potential of particle therapy much closer to the daily clinical routine.

Materials and methods: This work contributes to the field of in-vivo treatment verification through the methodical investigation of range assessment via the detection of prompt gamma-rays, a side product emitted due to proton-tissue interactions. In the first part, the concept of measuring the spatial prompt gamma-ray emission profile with a Compton camera is investigated with a prototype system consisting of a CdZnTe cross-strip detector as scatter plane and three side-by-side arranged, segmented BGO block detectors as absorber planes. In the second part, the novel method of prompt gamma-ray timing (PGT) is introduced. This technique has been developed in the scope of this work and a patent has been applied for. The necessary physical considerations for PGT are outlined and the feasibility of the method is supported by first proof-of-principle experiments.

Results: Compton camera: Utilizing a 22-Na source, the feasibility of reconstructing the emission scene of a point source at 1.275 MeV was verified. Suitable filters on the scatter-absorber coincidence timing and the respective sum energy were defined and applied to the data. The source position and corresponding source displacements could be verified in the reconstructed Compton images. In a next step, a Compton imaging test at 4.44 MeV photon energy was performed. A suitable test setup was identified at the Tandetron accelerator at the Helmholtz-Zentrum Dresden-Rossendorf, Germany. This measurement setup provided a monoenergetic, point-like source of 4.44 MeV gamma-rays that was nearly free of background. Here, the absolute gamma-ray yield was determined. The Compton imaging prototype was tested at the Tandetron regarding (i) the energy resolution, timing resolution, and spatial resolution of the individual detectors, (ii) the imaging capabilities of the prototype at 4.44 MeV gamma-ray energy and (iii) the Compton imaging efficiency. In a Compton imaging test, the source position and the corresponding source displacements were verified in the reconstructed Compton images. Furthermore, via the quantitative gamma-ray emission yield, the Compton imaging efficiency at 4.44 MeV photon energy was determined experimentally. PGT: The concept of PGT was developed and introduced to the scientific community in the scope of this thesis. A theoretical model for PGT was developed and outlined. Based on the theoretical considerations, a Monte Carlo (MC) algorithm capable of simulating PGT distributions was implemented. At the KVI-CART proton beam line in Groningen, The Netherlands, time-resolved prompt gamma-ray spectra were recorded with a small-scale, scintillator-based detection system. The recorded data were analyzed in the scope of PGT and compared to the simulated distributions, yielding excellent agreement and thus verifying the developed theoretical basis. For a hypothetical PGT imaging setup at a therapeutic proton beam it was shown that the statistical error on the range determination could be reduced to 5 mm at a 90% confidence level for a single spot of 5x10^8 protons.

Conclusion: Compton imaging and PGT were investigated as candidates for treatment verification based on the detection of prompt gamma-rays. The feasibility of Compton imaging at photon energies of several MeV was proven, which supports the approach of imaging high-energy prompt gamma-rays. However, the applicability of a Compton camera under therapeutic conditions was found to be questionable, due to (i) the low detection efficiency of the device and the corresponding limited number of valid events that can be recorded within a single treatment and utilized for image reconstruction, and (ii) the complexity of the detector setup and attached readout electronics, which make the development of a clinical prototype expensive and time-consuming. PGT is based on a simple time-spectroscopic measurement approach. The collimation-less detection principle implies a high detection efficiency compared to the Compton camera. The promising results on the applicability under treatment conditions and the simplicity of the detector setup qualify PGT as a method well suited for a fast translation towards a clinical trial.
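A toy version of the PGT Monte Carlo idea under strong simplifications (constant proton deceleration, straight-line geometry; all parameters illustrative, not the thesis's beam model): a detected prompt gamma's time is the proton transit time to the emission depth plus the photon's flight time to the detector.

```python
import numpy as np

C = 299.792                    # speed of light (mm/ns)
rng = np.random.default_rng(4)

range_mm = 150.0               # proton range in tissue, illustrative
t_stop = 1.2                   # proton stopping time (ns), illustrative
det = (0.0, 300.0)             # detector position (z, lateral) in mm

# Emission depths sampled uniformly along the track (a crude stand-in for the
# true depth-dependent prompt-gamma yield).
z = rng.uniform(0.0, range_mm, 50000)

# Transit time for constant deceleration: z(t) = v0*t - (v0/t_stop)*t^2/2
# inverts to t(z) = t_stop * (1 - sqrt(1 - z/range)).
t_transit = t_stop * (1.0 - np.sqrt(1.0 - z / range_mm))

# Photon time of flight from the emission point (z, 0) to the detector.
t_gamma = np.hypot(det[0] - z, det[1]) / C

pgt = t_transit + t_gamma
print(f"PGT mean = {pgt.mean():.3f} ns, spread (std) = {pgt.std() * 1e3:.0f} ps")
```

A range shift moves both the mean and the width of this distribution, which is what the method exploits.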
Hintergrund Strahlentherapie ist eine wichtige Modalität der therapeutischen Behandlung von Krebs. Das Ziel dieser Behandlungsform ist die Applikation einer bestimmten Strahlendosis im Tumorvolumen, wobei umliegendes, gesundes Gewebe nach Möglichkeit geschont werden soll. Bei der Bestrahlung mit einem hochenergetischen Protonenstrahl erlaubt die wohldefinierte Reichweite der Teilchen im Gewebe, in Kombination mit dem steilen, distalen Dosisgradienten, eine hohe Tumor-Konformalität der deponierten Dosis. Verglichen mit der klassisch eingesetzten Behandlung mit Photonen ergibt sich für eine optimiert geplante Behandlung mit Protonen ein deutlich reduziertes Dosisnivau im den Tumor umgebenden Gewebe. Motivation Die tatsächlich applizierte Reichweite der Protonen im Körper, und somit auch die lokal deponierte Dosis, ist stark abhängig vom Bremsvermögen der Materie im Strahlengang der Protonen. Bestrahlungspläne werden mit Hilfe eines Computertomographen (CT) erstellt, wobei die CT Bilder vor der eigentlichen Behandlung aufgenommen werden. Ein CT misst allerdings lediglich den linearen Schwächungskoeffizienten für Photonen in der Einheit Hounsfield Units (HU). Die Ungenauigkeit in der Umrechnung von HU in Protonen-Bremsvermögen ist, unter anderem, eine wesentliche Ursache für die Unsicherheit über die tatsächliche Reichweite der Protonen im Körper des Patienten. Derzeit existiert keine routinemäßige Methode, um die applizierte Dosis oder auch die Protonenreichweite in-vivo und in Echtzeit zu bestimmen. Um das geplante Dosisniveau im Tumorvolumen trotz möglicher Reichweiteunterschiede zu gewährleisten, werden die Bestrahlungspläne für Protonen auf Robustheit optimiert, was zum Einen das geplante Dosisniveau im Tumorvolumen trotz auftretender Reichweiteveränderungen sicherstellen soll, zum Anderen aber auf Kosten der möglichen Dosiseinsparung im gesunden Gewebe geht. Zusammengefasst kann der Hauptvorteil einer Therapie mit Protonen wegen der Unsicherheit über die tatsächlich applizierte Reichweite nicht wirklich realisiert. Eine Methode zur Bestimmung der Reichweite in-vivo und in Echtzeit wäre daher von großem Nutzen, um das theoretische Potential der Protonentherapie auch in der praktisch ausschöpfen zu können. Material und Methoden In dieser Arbeit werden zwei Konzepte zur Messung prompter Gamma-Strahlung behandelt, welche potentiell zur Bestimmung der Reichweite der Protonen im Körper eingesetzt werden können. Prompte Gamma-Strahlung entsteht durch Proton-Atomkern-Kollision auf einer Zeitskala unterhalb von Picosekunden entlang des Strahlweges der Protonen im Gewebe. Aufgrund der prompten Emission ist diese Form der Sekundärstrahlung ein aussichtsreicher Kandidat für eine Bestrahlungs-Verifikation in Echtzeit. Zum Einen wird die Anwendbarkeit von Compton-Kameras anhand eines Prototyps untersucht. Dabei zielt die Messung auf die Rekonstruktion des örtlichen Emissionsprofils der prompten Gammas ab. Zum Zweiten wird eine, im Rahmen dieser Arbeit neu entwickelte Messmethode, das Prompt Gamma-Ray Timing (PGT), vorgestellt und international zum Patent angemeldet. Im Gegensatz zu bereits bekannten Ansätzen, verwendet PGT die endliche Flugzeit der Protonen durch das Gewebe und bestimmt zeitliche Emissionsprofile der prompten Gammas. Ergebnisse Compton Kamera: Die örtliche Emissionsverteilung einer punktförmigen 22-Na Quelle wurde wurde bei einer Photonenenergie von 1.275 MeV nachgewiesen. Dabei konnten sowohl die absolute Quellposition als auch laterale Verschiebungen der Quelle rekonstruiert werden. 
Da prompte Gamma-Strahlung Emissionsenergien von einigen MeV aufweist, wurde als nächster Schritt ein Bildrekonstruktionstest bei 4.44 MeV durchgeführt. Ein geeignetes Testsetup wurde am Tandetron Beschleuniger am Helmholtz-Zentrum Dresden-Rossendorf, Deutschland, identifiziert, wo eine monoenergetische, punktförmige Emissionverteilung von 4.44 MeV Photonen erzeugt werden konnte. Für die Detektoren des Prototyps wurden zum Einen die örtliche und zeitliche Auflösung sowie die Energieauflösungen untersucht. Zum Anderen wurde die Emissionsverteilung der erzeugten 4.44 MeV Quelle rekonstruiert und die zugehörige Effizienz des Prototyps experimentell bestimmt. PGT: Für das neu vorgeschlagene Messverfahren PGT wurden im Rahmen dieser Arbeit die theoretischen Grundlagen ausgearbeitet und dargestellt. Darauf basierend, wurde ein Monte Carlo (MC) Code entwickelt, welcher die Modellierung von PGT Spektren ermöglicht. Am Protonenstrahl des Kernfysisch Verschneller Institut (KVI), Groningen, Niederlande, wurden zeitaufgelöste Spektren prompter Gammastrahlung aufgenommen und analysiert. Durch einen Vergleich von experimentellen und modellierten Daten konnte die Gültigkeit der vorgelegten theoretischen Überlegungen quantitativ bestätigt werden. Anhand eines hypothetischen Bestrahlungsszenarios wurde gezeigt, dass der statistische Fehler in der Bestimmung der Reichweite mit einer Genauigkeit von 5 mm bei einem Konfidenzniveau von 90 % für einen einzelnen starken Spot 5x10E8 Protonen mit PGT erreichbar ist. Schlussfolgerungen Für den Compton Kamera Prototyp wurde gezeigt, dass eine Bildgebung für Gamma-Energien einiger MeV, wie sie bei prompter Gammastrahlung auftreten, möglich ist. Allerdings erlaubt die prinzipielle Abbildbarkeit noch keine Nutzbarkeit unter therapeutischen Strahlbedingungen nicht. Der wesentliche und in dieser Arbeit nachgewiesene Hinderungsgrund liegt in der niedrigen (gemessenen) Nachweiseffizienz, welche die Anzahl der validen Daten, die für die Bildrekonstruktion genutzt werden können, drastisch einschränkt. PGT basiert, im Gegensatz zur Compton Kamera, auf einem einfachen zeit-spektroskopischen Messaufbau. Die kollimatorfreie Messmethode erlaubt eine gute Nachweiseffizienz und kann somit den statistischen Fehler bei der Reichweitenbestimmung auf ein klinisch relevantes Niveau reduzieren. Die guten Ergebnissen und die ausgeführten Abschätzungen für therapeutische Bedingungen lassen erwarten, dass PGT als Grundlage für eine Bestrahlungsverifiktation in-vivo und in Echtzeit zügig klinisch umgesetzt werden kann
22

Enchakalody, Binu Eapen. « Timing-Pulse Measurement and Detector Calibration of the OsteoQuant ». Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1247359338.

Texte intégral
23

Agapopoulou, Christina. « Search for supersymmetry with the ATLAS detector and development of the High Granularity Timing Detector ». Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP019.

Texte intégral
Résumé :
The Standard Model of particle physics is an extremely successful theoretical framework, describing the elementary particles and their interactions. With the discovery of the Higgs boson by the ATLAS and CMS experiments in 2012, the Standard Model is now complete. However, open questions remain, calling for a larger theoretical model that encapsulates the Standard Model while providing mechanisms for the unexplained phenomena. Supersymmetry offers such a framework by introducing a new symmetry between bosons and fermions. It provides potential solutions to the hierarchy problem for the Higgs boson mass and also offers a candidate to explain the dark matter of the universe. The first part of this thesis is the search for supersymmetry with the ATLAS detector at the LHC, using the full Run 2 dataset, amounting to an integrated luminosity of 139 fb⁻¹. The focus is on the search for squarks and gluinos, the "super-partners" of quarks and gluons, in models where R-parity is conserved and in final states with jets and large missing transverse momentum. My main contribution to this analysis was the development and optimization of a novel technique, named the Multi-Bin fit, to enhance the signal-to-background separation and extend the exclusion reach of the search. The expected gain in the excluded cross section from using a Multi-Bin fit configuration, as opposed to the traditional "cut&count" approach, was estimated to be 40-70% in the studied models. In addition, I worked on the statistical inference of the search, ranging from the evaluation of various systematics to the interpretation of the results in several simplified supersymmetric models. No excess above the Standard Model prediction was found, and therefore squarks and gluinos with masses up to 1.85 TeV and 2.34 TeV, respectively, were excluded. This result is a significant improvement over the previous round of the analysis and one of the strongest constraints on squark and gluino masses today. The high-luminosity data-taking phase (HL-LHC) will see an increase of the collision rate by a factor of 5 to 7. In order to mitigate the increase in pile-up, ATLAS will install a new highly granular silicon detector with very good time resolution in the forward region, the High Granularity Timing Detector (HGTD). The goal of this detector is to provide a time resolution better than 50 ps per track. The second part of this thesis focuses on two main aspects of the development of the HGTD. On one hand, I performed simulation studies to evaluate the occupancy and read-out requirements of the detector for various geometries. The occupancy of the detector must remain below 10% in order to correctly assign energy deposits to tracks crossing the detector. This requirement was found to be met with a sensor size of 1.3 x 1.3 mm², which is now the baseline for the future detector. Additionally, the organization of the on-detector read-out system was optimised in order to maximise the available space and minimise the necessary components. The performance of any silicon detector is strongly linked to the design of the front-end electronic circuit. As part of my work in HGTD, I also participated in the characterization of two front-end electronic prototypes, ALTIROC0 and ALTIROC1, both in the laboratory with a calibration system and in test beams with high-energy electrons and protons. The time resolution was found to be better than 55 ps in all tested devices, with a best achieved performance of 34 ps.
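The 10% occupancy requirement quoted above can be checked with a one-line Poisson estimate: the probability that a pad of a given area fires in a bunch crossing, for a given areal hit density. The sketch below uses hypothetical hit densities chosen for illustration only.

import numpy as np

def pad_occupancy(hit_density_cm2, pad_side_mm=1.3):
    """Probability that a pad fires in one bunch crossing, assuming hits
    are Poisson-distributed with the given density (hits/cm^2/crossing)."""
    area_cm2 = (pad_side_mm / 10.0) ** 2
    mu = hit_density_cm2 * area_cm2
    return 1.0 - np.exp(-mu)

# Hypothetical charged-particle densities in a forward timing layer
# (illustrative numbers only, not values from the thesis).
for rho in (0.3, 1.0, 5.0):
    print(f"{rho:4.1f} hits/cm^2 -> occupancy {100 * pad_occupancy(rho):.2f} %")

Even at the largest assumed density, a 1.3 x 1.3 mm² pad stays below the 10% target, which is consistent with the choice reported in the abstract.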
24

Holmberg, Mei-Li. « Studies of Low Gain Avalanche Detector prototype sensors for the ATLAS High-Granularity Timing Detector ». Thesis, KTH, Fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-253906.

Texte intégral
25

Poll, Tony. « A novel algorithm for exotic particle reconstruction using detector timing ». Thesis, University of Bristol, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.685553.

Texte intégral
Résumé :
The Standard Model of particle physics has been verified by many experiments yet leaves some questions unanswered. Many theories for physics beyond the Standard Model predict long-lived neutral particles. The high energy proton-proton collisions supplied by the Large Hadron Collider provide fresh opportunities to search for new particles using the Compact Muon Solenoid particle detector. A new algorithm has been developed that uses the timing of energy deposits in the detector's electromagnetic calorimeter to accurately reconstruct long-lived neutral particles decaying within the tracker volume into an electron and a positron. From simulations the algorithm can reconstruct a long-lived particle's decay length with a correlation coefficient of up to +0.88 to the true decay length. Using data from proton-proton collisions recorded by the CMS detector at √s = 8 TeV, corresponding to 19.8 fb⁻¹ of integrated luminosity, a preliminary search for new physics was conducted. This tool is now available to future particle searches, significantly extending the search space provided by more traditional techniques.
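As a rough illustration of how calorimeter timing encodes a decay length, consider a collinear toy model: a neutral particle with speed βc decays at distance L and its daughter then travels at c along the same line, so the calorimeter arrival is delayed by L(1/β - 1)/c relative to a prompt photon. The Python sketch below inverts this relation; the geometry and the value of β are illustrative assumptions, not the thesis's algorithm.

C = 29.9792458  # speed of light in cm/ns

def extra_delay(L_cm, beta):
    """Extra arrival time (ns) of a daughter from an LLP decaying at
    distance L, relative to a prompt photon, in a collinear toy geometry."""
    return L_cm * (1.0 / beta - 1.0) / C

def decay_length(dt_ns, beta):
    """Invert the toy relation: L = c * dt * beta / (1 - beta)."""
    return C * dt_ns * beta / (1.0 - beta)

beta = 0.8
dt = extra_delay(50.0, beta)   # LLP decaying 50 cm from the beamline
print(f"delay {dt * 1e3:.0f} ps -> reconstructed L = {decay_length(dt, beta):.1f} cm")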
26

Nash, Jonathan. « Application of the direct timing method in the ZEUS Central Tracking Detector ». Thesis, University of Oxford, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.276830.

Texte intégral
27

Allaire, Corentin. « ATLAS : Search for Supersymmetry and optimization of the High Granularity timing detector ». Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS316/document.

Texte intégral
Résumé :
The Standard Model of particle physics has been extremely successful in describing the elementary particles and their interactions. Nevertheless, there are open questions that are left unanswered. Whether supersymmetry can provide answers to some of these is being studied in 13 TeV proton-proton collisions in the ATLAS experiment at the LHC. In this thesis a search for pair-produced colored particles in ATLAS decaying into pairs of jets, using data from 2016, 2017 and 2018, is presented. Such particles would escape standard supersymmetry searches due to the absence of missing transverse energy in the final state. Stops decaying via an R-parity violating coupling and sgluons, scalar partners of the gluino, were considered. In the absence of a signal, an improvement of 200 GeV on the limit on the stop mass is expected. The HL-LHC will increase the integrated luminosity delivered to probe even higher mass ranges as well as improving the precision of Standard Model measurements. The instantaneous luminosity will be increased by a factor of 5 and an integrated luminosity of 4000 fb⁻¹ should be reached by the end of the LHC in 2037. A study of the Higgs coupling measurement prospects at the HL-LHC using SFitter is performed. Using the Delta and EFT frameworks, it is shown that the increase in luminosity will result in a significant improvement of the precision of the measurement of the couplings. The High Granularity Timing Detector will be installed in ATLAS for the HL-LHC. A simulation of the detector that takes into account the timing resolution was developed and used to optimize its layout. The detector performance was studied: more than 80 % of the tracks have their time correctly reconstructed, with a resolution of 20 ps before irradiation and 50 ps after. Using the timing information, the electron isolation efficiency is improved by 10 %.
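A small toy study suggests why per-track resolutions of 20-50 ps matter for pile-up rejection. It assumes collisions spread in time with an RMS of about 180 ps (an assumed HL-LHC-like value) and counts how often a track would be matched to the wrong vertex on time alone; it ignores the z coordinate, and all numbers are illustrative.

import numpy as np

rng = np.random.default_rng(0)
t_vtx = rng.normal(0.0, 0.180, 200)      # pile-up vertex times (ns), assumed spread

for sigma_trk in (0.020, 0.050):         # per-track resolution before/after irradiation
    # Tracks truly from vertex 0, measured with resolution sigma_trk.
    t_track = rng.normal(t_vtx[0], sigma_trk, 10000)
    nearest = np.abs(t_track[:, None] - t_vtx[None, :]).argmin(axis=1)
    frac = np.mean(nearest == 0)
    print(f"{sigma_trk * 1e3:.0f} ps tracks: right vertex {100 * frac:.1f} % of the time")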
28

Zimmermann, Sebastian [Verfasser]. « Development of the Fast Timing PANDA Barrel Time-of-Flight Detector / Sebastian Zimmermann ». Gießen : Universitätsbibliothek, 2021. http://d-nb.info/1230476105/34.

Texte intégral
29

Cendes, Yvette N. « An Extended Study on the Effects of Incorrect Coordinates on Surface Detector Timing ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1310742128.

Texte intégral
30

Backman, Filip. « Analysis of Test Beam Data for Sensors in the High Granularity Timing Detector ». Thesis, KTH, Fysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210240.

Texte intégral
31

PIGAZZINI, SIMONE. « Search for anomalous production of high energy photon events with the CMS detector at the LHC and prospects for HL-LHC ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/198972.

Texte intégral
Résumé :
Although the Standard Model of particle physics (SM) describes the fundamental interactions of matter with extreme success, it does not provide a solution for the open questions of modern physics. The nature of cosmological dark matter, a quantum description of gravity and the hierarchy problem cannot be included in the framework of the SM. For this reason several extensions have been proposed throughout the years to address these open problems. The beyond-the-Standard-Model (BSM) frameworks often predict the existence of additional particles, arising either from additional symmetries introduced by the model or from the inclusion of gravity. Part of the parameter space of these models can be covered by experiments at the LHC, since the predicted particles can have masses in the TeV range. Diphoton resonant production is sensitive to spin-0 and spin-2 BSM resonances. These can originate from warped extra dimensions or from extensions of the Higgs sector, which are typically included in BSM models. The excellent energy resolution achieved with the CMS electromagnetic calorimeter (ECAL) and the clean signature of diphoton events make this channel very attractive as a tool for the search for exotic resonances. The sensitivity of the search in the diphoton channel is subordinated to the ECAL energy resolution and the precision on the location of the interaction vertex. The search presented in this work has been conducted on data collected by the CMS experiment at the LHC in proton-proton collisions at a center-of-mass energy of 13 TeV, for a total integrated luminosity of 35.9 fb⁻¹. No significant deviation from the Standard Model prediction has been highlighted by the analysis, thus exclusion limits on the graviton production cross-section have been established in the context of the Randall-Sundrum extra-dimensions model. The limits vary between 6 fb and 0.1 fb depending on the mass and coupling of the resonance in the 0.5 < m < 4.5 TeV and 0.01 < κ < 0.2 ranges. The LHC program foresees a high-luminosity phase starting from 2026 (HL-LHC), during which the instantaneous luminosity will reach the record value of 7.5×10³⁴ cm⁻² s⁻¹, five times the current one. On one hand, the higher instantaneous luminosity will benefit the physics analyses by providing a dataset 10 times larger than what will be available during the LHC phase but, on the other hand, it will pose severe challenges to the event reconstruction given the high number of overlapping collisions. CMS is already planning various actions and detector upgrades to match the physics goals of the HL-LHC. Among those, the introduction of time into the event reconstruction will require the installation of a completely new detector. Technologies suitable for measuring the time of charged particles with a precision of 30 ps have been identified through a series of tests with particle beams. In the same tests the intrinsic time resolution of the ECAL was proved to be better than 20 ps for electrons and photons of at least 25 GeV. The R&D campaign has been coupled to simulation studies to quantify the expected gain in performance provided by a time-aware event reconstruction. The simulation studies show a general improvement for observables of interest for the HL-LHC physics program.
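The dependence of the diphoton mass on the vertex position mentioned above follows from m_γγ = sqrt(2 E₁ E₂ (1 - cos θ)): a mislocated vertex biases the opening angle θ. The short sketch below quantifies the effect for an assumed barrel-like geometry (1.3 m radius, 5 cm vertex displacement); all numbers are illustrative, not taken from the thesis.

import numpy as np

def diphoton_mass(e1, e2, p1, p2, vtx):
    """Invariant mass m = sqrt(2*E1*E2*(1 - cos(theta))), with photon
    directions taken from an assumed vertex to the ECAL impact points
    p1 and p2 (coordinates in metres, energies in GeV)."""
    u1 = (p1 - vtx) / np.linalg.norm(p1 - vtx)
    u2 = (p2 - vtx) / np.linalg.norm(p2 - vtx)
    return np.sqrt(2.0 * e1 * e2 * (1.0 - np.dot(u1, u2)))

p1 = np.array([1.3, 0.0, 1.3])    # impact points on a 1.3 m radius barrel
p2 = np.array([-1.3, 0.0, 1.3])
e1 = e2 = 500.0

m0 = diphoton_mass(e1, e2, p1, p2, np.zeros(3))
m1 = diphoton_mass(e1, e2, p1, p2, np.array([0.0, 0.0, 0.05]))
print(f"true vertex: m = {m0:.1f} GeV; vertex off by 5 cm: m = {m1:.1f} GeV")

A few-centimetre vertex error already shifts the reconstructed mass at the percent level in this toy geometry, which is why vertex location enters the sensitivity discussion above.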
32

Szadaj, Antek. « Performance studies of Low-Gain Avalanche Diodes for the ATLAS High-Granularity Timing Detector ». Thesis, KTH, Fysik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240145.

Texte intégral
33

Tsivras, Sotirios-Ilias. « ALTO Timing Calibration : Calibration of the ALTO detector array based on cosmic-ray simulations ». Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-79429.

Texte intégral
Résumé :
This thesis describes a timing calibration method for the detector array of the ALTO experiment. ALTO is a project, currently at the prototype phase, that aims to build a gamma-ray astronomical observatory at high altitude in the Southern hemisphere. ALTO can be considered a hybrid system, as each detector consists of a Water Cherenkov Detector (WCD) on top of a Scintillator Detector (SD), providing an increased signal-to-background discrimination compared to other WCD arrays. ALTO is planned to complement the Very-High-Energy (VHE) observations by the High Altitude Water Cherenkov (HAWC) gamma-ray observatory, which collects data from the Northern sky. By the time the full array of 1242 detectors is installed at the proposed site, ALTO, together with HAWC and the future Cherenkov Telescope Array (CTA), will serve as a state-of-the-art detection system for VHE gamma-rays combining the WCD and the Imaging Atmospheric Cherenkov Telescope (IACT) techniques. When a VHE gamma-ray or cosmic-ray enters the Earth's atmosphere, it initiates an Extensive Air Shower (EAS). The shower particles are sampled by the detector array and, by checking the arrival times at nearby tanks, the method reveals whether a detector suffers from a time-offset. The data analyzed in this thesis derive from CORSIKA (COsmic Ray SImulation for KAscade) and GEANT4 (GEometry ANd Tracking) simulations of cosmic-ray events within the energy range of 1-1.6 TeV, which mainly consist of protons. The high flux of this particular type of cosmic rays gives us a tool to statistically evaluate the results generated by the proposed timing calibration method. In the framework of this thesis, I have written code in the Python programming language in order to develop the timing calibration method. The method identifies detectors that suffer from time-offsets and improves the reconstruction accuracy of the ALTO detector array. Different Python packages were used to execute different tasks: astropy to read, filter, present and write large datasets, numpy (Numerical Python) to make datasets comprehensible to functions, scipy (Scientific Python) to develop our models, sympy (Symbolic Python) to find geometrical correlations and matplotlib (Mathematical Plotting Library) to draw figures and diagrams. The current version of the method achieves sub-nanosecond accuracy. The next step is to make the timing calibration more intelligent so that it can correct itself. This self-correction includes an agile adaptation to the data acquired over long periods of time, in order to make different compromises at different time intervals.
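A minimal numerical sketch of this kind of array calibration: simulate plane shower fronts over a toy 5 x 5 grid of tanks, inject per-detector offsets, and recover them from the residuals of per-event plane fits. This illustrates the idea only (hypothetical grid spacing, jitter and offset values) and is not the code developed in the thesis; note that offset components degenerate with a global array tilt or shift are unobservable from shower data alone and are removed from the injected truth.

import numpy as np

rng = np.random.default_rng(42)
n_det = 25
xy = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2) * 10.0
A = np.hstack([xy, np.ones((n_det, 1))])     # plane-front design matrix: t = a*x + b*y + t0

true_off = rng.normal(0.0, 2.0, n_det)       # injected per-tank offsets (ns)
p, *_ = np.linalg.lstsq(A, true_off, rcond=None)
true_off -= A @ p                             # drop the unobservable tilt/shift part

theta = rng.uniform(0.0, 0.5, 3000)           # shower zenith angles (rad)
phi = rng.uniform(0.0, 2.0 * np.pi, 3000)
k = np.stack([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi)], 1)
t = xy @ k.T / 0.2998 + true_off[:, None] + rng.normal(0.0, 1.0, (n_det, 3000))

est = np.zeros(n_det)
for _ in range(4):                            # iterate: plane fits, then offset update
    coef, *_ = np.linalg.lstsq(A, t - est[:, None], rcond=None)
    resid = (t - est[:, None]) - A @ coef
    est += np.median(resid, axis=1)

print(f"worst per-tank offset error: {np.max(np.abs(est - true_off)):.3f} ns")

With a few thousand events the recovered offsets land well below a nanosecond, consistent with the sub-nanosecond accuracy quoted in the abstract.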
34

Peille, Philippe. « Développement d'un simulateur pour le X-ray integral field unit : du signal astrophysique à la performance instrumentale ». Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30236/document.

Texte intégral
Résumé :
This thesis is dedicated to the development of an End-to-End model for the X-IFU spectrocalorimeter scheduled for launch in 2028 on board the Athena mission, which will observe the X-ray universe with unprecedented precision. This work has been organized in two main parts. I first studied the dynamics of the innermost parts of low-mass X-ray binaries using two specific probes of the accretion flow: type I X-ray bursts and kHz quasi-periodic oscillations (kHz QPOs). Starting from the archival data of the Rossi X-ray Timing Explorer mission and using specific data analysis techniques, I notably highlighted for the first time a reaction of the latter to the former, confirming the tight link between these oscillations and the inner parts of the system. The measured recovery time was also found, in most cases, to be in conflict with recent claims of an enhancement of the accretion rate following these thermonuclear explosions. From the exhaustive spectral-timing analysis of both kHz QPOs in 4U 1728-34, I further confirmed the inconsistency of their lag-energy spectra, pointing towards a different origin for these two oscillations. The study of their covariance spectra, obtained here for the first time, revealed the key role of the Comptonization layer, and potentially of a more compact part of it, in the emission of the QPOs. In the second part of my thesis, I focused on the development of an End-to-End simulator for the X-IFU capable of depicting the full process leading to an X-ray observation, from the emission of photons by the astrophysical source to their detection on board. I notably implemented tools allowing the precise comparison of different potential pixel-array configurations, taking into account the effects of the event reconstruction from the raw data coming from the readout electronics. This study highlighted the advantage of hybrid arrays containing a small-pixel sub-array capable of improving the count-rate capability of the instrument by an order of magnitude. An alternative solution would consist in defocusing the mirror during the observation of bright point sources. As this is a key component of the overall X-IFU performance, I also thoroughly compared different reconstruction methods for the raw pixel signal. This showed that, with a minimal impact on the required on-board processing power, a significant improvement of the final energy resolution can be obtained from more sophisticated reconstruction methods. Taking into account the calibration constraints, the most promising candidate currently appears to be the so-called "resistance space analysis". Taking advantage of the performance characterization obtained for the different foreseen pixel types, I also developed a fast and modular simulation method for the complete instrument, providing representative synthetic observations, with long exposure times, of complex astrophysical sources representative of the future capabilities of the X-IFU. This tool notably allowed me to study the sensitivity of the instrument to dead-time and confusion effects, and also to estimate its future ability to distinguish different turbulence regimes in galaxy clusters and to measure abundance and temperature profiles. In the longer run, this simulator will be useful for the study of other scientific cases as well as for the analysis of instrumental effects at the full detection-plane level, such as pixel crosstalk.
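Pulse reconstruction of the kind compared in this work can be illustrated with the standard optimal-filter baseline; under a white-noise assumption it reduces to a least-squares template fit. The sketch below is generic (a toy double-exponential template and hypothetical sampling), and is not the "resistance space analysis" selected in the thesis.

import numpy as np

rng = np.random.default_rng(3)
n = 1024
t = np.arange(n) * 1e-5                            # assumed 10 us sampling
template = np.exp(-t / 2e-3) - np.exp(-t / 2e-4)   # toy pulse shape
template /= template.max()

def of_amplitude(data, templ):
    """Optimal-filter amplitude estimate for white noise: the least-squares
    scale factor of the known pulse template (an energy estimator up to a
    calibration constant)."""
    return templ @ data / (templ @ templ)

data = 1.0 * template + rng.normal(0.0, 0.05, n)   # true amplitude = 1
print(f"estimated amplitude: {of_amplitude(data, template):.4f}")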
35

Wilkinson, Christopher Richard. « The application of high precision timing in the high resolution fly's eye cosmic ray detector ». Title page, contents and abstract only, 1998. http://hdl.handle.net/2440/37715.

Texte intégral
Résumé :
This thesis represents work performed by the author on the development of the High Resolution Fly's Eye (HiRes) detector for the study of extremely high energy (>10¹⁸ eV) cosmic rays. Chapter 1 begins with a review of this field. This chapter details the development of the field, the physics questions we seek to answer, and our current understanding based on experimental and theoretical results. It provides the basis for understanding why detectors such as HiRes are being constructed. This review leads into chapter 2, which discusses the development of cosmic-ray induced extensive air showers (EAS) and the techniques used to study them. Particular emphasis is placed upon the air fluorescence technique utilised by HiRes. The two-site HiRes prototype detector is then discussed in detail in chapter 3. This covers the different components that form the detector, together with details of the calibration performed to extract useful information from the data. Chapter 4 discusses the installation and subsequent testing of GPS-based clock systems for the two sites that make up the HiRes prototype detector. The entire timing system was checked, and some previously hidden bugs fixed. This chapter concludes with work performed on the time-to-digital converter calibration for the second HiRes site. The high relative timing accuracy provided by the GPS clocks allowed the use of timing information in programs to reconstruct the arrival directions of cosmic rays. Chapter 5 covers the development of a program to use geometrical and timing information to reconstruct EAS viewed by both HiRes sites. This chapter concludes with an evaluation of the likely reconstruction accuracy of the new HiRes (stage 1) detector. A well reconstructed EAS trajectory is the first step in the determination of more interesting parameters such as primary particle energy. Chapter 6 covers the collation and analysis of EAS viewed by both sites of the prototype detector. This includes an evaluation of effects such as the atmosphere, and an estimation of the performance of the new (stage 1) HiRes detector based on results with the prototype detector. Finally the conclusions from this thesis are summarised and suggestions made for further follow-up work.
Thesis (Ph.D.)--Department of Physics and Mathematical Physics, 1998.
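For a fluorescence detector, the standard monocular timing fit constrains the shower geometry within the shower-detector plane through t(χ) = t₀ + (Rₚ/c) tan((π - ψ - χ)/2), where Rₚ is the impact parameter and ψ the in-plane shower inclination. A minimal sketch of such a fit on synthetic data follows; all values are illustrative and this is not the reconstruction program developed in the thesis.

import numpy as np
from scipy.optimize import curve_fit

def t_model(chi, t0, rp, psi):
    """Monocular timing fit: light arrival time (ns) versus viewing angle
    chi (rad) in the shower-detector plane, for impact parameter rp (m)
    and in-plane inclination psi (rad); 0.2998 m/ns is the speed of light."""
    return t0 + rp / 0.2998 * np.tan((np.pi - psi - chi) / 2.0)

rng = np.random.default_rng(7)
chi = np.linspace(0.2, 1.2, 30)                       # triggered-tube angles
t_obs = t_model(chi, 100.0, 8000.0, 1.1) + rng.normal(0.0, 20.0, chi.size)

(t0, rp, psi), _ = curve_fit(t_model, chi, t_obs, p0=(0.0, 5000.0, 1.0))
print(f"Rp = {rp:.0f} m, psi = {np.degrees(psi):.1f} deg")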
36

Flick, Tobias. « Studies on the optical readout for the ATLAS Pixel Detector : systematical studies on the functions of the back of crate card and the timing of the Pixel Detector ». [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=982435762.

Texte intégral
37

Egidos, Plaja Núria. « On the digital design and verification of pixel detector ASICs for fast timing applications and other fields of science ». Doctoral thesis, Universitat de Barcelona, 2021. http://hdl.handle.net/10803/671794.

Texte intégral
Résumé :
The main contribution of this thesis is the design, implementation and verification, using digital tools, of a clock distribution network for FastICpix, a hybrid pixel detector capable of processing single photons. This network distributes a low-frequency (tens of MHz) time reference to the pixel matrix: a clock used by the time-tagging mechanism for photon arrivals. FastICpix adapts its area and pixel size to optimize charge collection for each application, and provides fine time resolution (10 ps RMS) in single-photon detection. To meet these requirements, the network can be scaled in area and adapted to the pixel size, and it provides fine phase adjustment (20 ps resolution) in the clock distribution. Although the proposed design has not yet been manufactured in silicon, digital simulations are presented that are annotated with the propagation delays associated with the parasitic capacitances and resistances of the circuit, which was implemented in a 65 nm node. The selected architecture meets the time-resolution requirements, and the estimated power consumption of the network is not the dominant contribution to the total consumption of the chip. Guidelines are provided for scaling this design to the other geometries considered in the FastICpix project. In addition, a verification framework based on the Universal Verification Methodology was implemented for CLICTD, a monolithic segmented sensor and readout chip intended for the silicon tracking detector of the Compact Linear Collider experiment. This chip was manufactured in a modified 180 nm CMOS imaging process. This exhaustive, automated verification made it possible to correct small design errors, which contributed to the successful operation of the chip once fabricated.
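The 20 ps phase-adjustment step quoted above implies a worst-case residual misalignment of about ±10 ps when a delay-line tap is chosen. A trivial sketch of the tap selection follows; the function and parameter names are hypothetical, for illustration only.

def phase_tap(target_ps, resolution_ps=20.0):
    """Pick the delay-line tap closest to the requested phase shift; with
    a 20 ps step the residual alignment error stays within +/-10 ps."""
    tap = round(target_ps / resolution_ps)
    residual = target_ps - tap * resolution_ps
    return tap, residual

print(phase_tap(137.0))   # -> (7, -3.0): tap 7 leaves a -3 ps residual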
38

Aliaga, Varea Ramón José. « Development of a data acquisition architecture with distributed synchronization for a Positron Emission Tomography system with integrated front-end ». Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/63271.

Texte intégral
Résumé :
[EN] Positron Emission Tomography (PET) is a non-invasive nuclear medical imaging modality that makes it possible to observe the distribution of metabolic substances within a patient's body after marking them with radioactive isotopes and arranging an annular scanner around the patient in order to detect their decays. The main applications of this technique are the detection and tracing of tumors in cancer patients and metabolic studies with small animals. The Electronic Design for Nuclear Applications (EDNA) research group within the Instituto de Instrumentación para Imagen Molecular (I3M) has been involved in the study of high-performance PET systems and maintains a small experimental setup with two detector modules. This thesis is framed within the necessity of developing a new data acquisition system (DAQ) for the aforementioned setup that corrects the drawbacks of the existing one. The main objective is to define a DAQ architecture that is completely scalable and modular, and that guarantees the mobility and reusability of its components, so that it admits any extension or modification of the setup and can be exported directly to the configurations used by other groups or experiments. At the same time, this architecture should be compatible with the best resolutions attainable at present instead of imposing artificial limits on system performance. In particular, the new DAQ system should outperform the previous one. As a first step, a general study of DAQ architectures is carried out in the context of experimental setups for PET and other high-energy physics applications. On one hand, the conclusion is reached that the desired specifications require early digitization of detector signals, exclusively digital communication between modules, and the absence of a centralized trigger. On the other hand, the necessity of a very precise distributed synchronization scheme between modules becomes apparent, with errors on the order of 100 ps, operating directly over the data links. A study of the existing methods reveals their severe limitations in terms of achievable precision. A theoretical analysis of the situation is carried out with the goal of overcoming them, and a new synchronization algorithm is proposed that is able to reach the desired resolution while getting rid of the restrictions on clock alignment that are imposed by virtually all usual schemes. Since the measurement of the clock phase difference plays a crucial role in the proposed algorithm, extensions to the existing methods are defined and analyzed that improve them significantly. The proposed scheme for synchronism is validated using commercial evaluation boards. Taking the proposed synchronization method as a starting point, a DAQ architecture for PET is defined that is composed of two types of module (acquisition and concentration) whose replication makes it possible to arrange a hierarchical system of arbitrary size, and circuit boards implementing the architecture for the particular case of two detectors are designed and commissioned. This DAQ is finally installed at the experimental setup, where its synchronization properties and its resolution as a PET system are characterized, and its performance is verified to have improved with respect to the previous system.
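For context on distributed synchronization over data links, the textbook two-way (PTP-style) exchange below estimates clock offset and link delay under a symmetric-delay assumption. It is only the standard baseline; the thesis proposes its own, more precise algorithm, which is not reproduced here.

def two_way_offset(t1, t2, t3, t4):
    """Two-way exchange: master sends at t1 (master clock), slave receives
    at t2 and replies at t3 (slave clock), master receives at t4. Assuming
    a symmetric link, the slave clock offset and one-way delay follow."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: slave clock ahead by 0.3 ns over a 5 ns link.
print(two_way_offset(t1=0.0, t2=5.3, t3=20.0, t4=24.7))   # -> (0.3, 5.0)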
39

Halliday, Robert Paul. « Electronics and Timing for the AugerPrime Upgrade and Correlation of Starburst Galaxies with Arrival Directions of Ultra High Energy Cosmic Rays ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1553599216169462.

Texte intégral
40

Heim, Timon [Verfasser]. « Performance of the Insertable B-Layer for the ATLAS Pixel Detector during Quality Assurance and a Novel Pixel Detector Readout Concept based on PCIe / Timon Heim ». Wuppertal : Universitätsbibliothek Wuppertal, 2016. http://d-nb.info/1120027438/34.

Texte intégral
41

TIBURZI, CATERINA. « Multidisciplinary studies of pulsar data and applications to Pulsar Timing Arrays ». Doctoral thesis, Università degli Studi di Cagliari, 2015. http://hdl.handle.net/11584/266796.

Texte intégral
Résumé :
Pulsars are fast-rotating, highly magnetized neutron stars, visible at radio wavelengths as pulsating objects thanks to two beams of emission that are focussed on the magnetic poles and co-rotate with the star, whose rotational axis is not aligned with the magnetic one. The rotational stability and the possibility of measuring the time of arrival of the "pulses" of emission with extreme precision make it possible to constrain the physical parameters of these sources and to undertake a wide range of studies. In this PhD thesis, we exploit this characteristic to explore several aspects of pulsar physics, mainly related to the "Pulsar Timing Array" experiments. The first aspect is pulsar polarization. Pulsars are among the most polarized objects of the radio sky ever known; however, the origin of pulsar polarization and of the "modes" of polarization that characterize pulsar emission is still obscure. Here we present a classic polarimetric study of long-period pulsars discovered during the High Time Resolution Universe survey, and a new approach to classify the combination of the polarized modes, along with a first application to the data. The second aspect directly concerns the Pulsar Timing Array experiments, whose main goal is a direct detection of gravitational waves using pulsars. So far, no detection has been claimed. However, given the increasing sensitivity of these experiments, it is extremely important to develop solid sanity checks on the data to state whether a future detection is genuine or not. We present here a study of false detections induced by correlated signals in Pulsar Timing Array experiments, along with a sample of possible routines to mitigate these effects. The third aspect is the long-term, decadal stability of millisecond pulsar template profiles in flux, which is one of the hypotheses of the procedures used to obtain extreme precision in measuring pulsar parameters. We study 10 millisecond pulsars using the longest and most uniform data set in the world. We also present the surprising result that one of the sources in our sample seems to show a systematic profile variation over the years covered by the data set.
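The sanity checks mentioned above exploit the fact that a genuine gravitational-wave background correlates pulsar pairs according to the quadrupolar Hellings-Downs curve, whereas correlated systematics such as a shared clock error correlate every pair identically (a flat, monopolar pattern). A small sketch of the two signatures, for illustration only:

import numpy as np

def hellings_downs(theta):
    """Expected PTA cross-correlation for an isotropic gravitational-wave
    background versus pulsar angular separation theta (radians)."""
    x = (1.0 - np.cos(theta)) / 2.0
    return 1.5 * x * np.log(x) - x / 4.0 + 0.5

theta = np.linspace(0.1, np.pi, 8)
print("HD curve:      ", np.round(hellings_downs(theta), 3))   # dips negative, recovers
print("clock monopole:", np.round(np.full(theta.size, 0.5), 3))  # flat in angle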
42

Güvenç, İsmail. « Towards practical design of impulse radio ultrawideband systems : Parameter estimation and adaptation, interference mitigation, and performance analysis ». Scholar Commons, 2006. http://scholarcommons.usf.edu/etd/2541.

Texte intégral
Résumé :
Ultrawideband (UWB) is one of the promising technologies for future short-range high data rate communications (e.g. for wireless personal area networks) and longer range low data rate communications (e.g. wireless sensor networks). Despite its various advantages and potentials (e.g. low-cost circuitry, unlicensed reuse of licensed spectrum, precision ranging capability, etc.), UWB also has its own challenges. The goal of this dissertation is to identify and address some of those challenges, and provide a framework for practical UWB transceiver design. In this dissertation, various modulation options for UWB systems are reviewed in terms of their bit error rate (BER) performances, spectral characteristics, modem and hardware complexities, and data rates. Time hopping (TH) code designs for both synchronous (introducing an adaptive code assignment technique) and asynchronous UWB impulse radio (IR) systems are studied. An adaptive assignment of two different multiple access parameters (number of pulses per symbol and number of pulse positions per frame) is investigated, again considering both synchronous and asynchronous scenarios, and a mathematical framework is developed using Gaussian approximations of interference statistics for different scenarios. Channel estimation algorithms for multiuser UWB communication systems using symbol-spaced (proposing a technique that decreases the training size), frame-spaced (proposing a pulse-discarding algorithm for enhanced estimation performance), and chip-spaced (using least squares (LS) estimation) sampling are analyzed. A comprehensive review on multiple accessing and interference avoidance/cancellation for IR-UWB systems is presented. BER performances of different UWB modulation schemes in the presence of timing jitter are evaluated and compared in static and multipath fading channels, and finger estimation error, effects of jitter distribution, and effects of pulse shape are investigated. A unified performance analysis approach for different IR-UWB transceiver types (stored-reference, transmitted-reference, and energy detector) employing various modulation options and operating at sub-Nyquist sampling rates is presented. The time-of-arrival (TOA) estimation performance of different searchback schemes under optimal and suboptimal threshold settings is analyzed both for additive white Gaussian noise (AWGN) and multipath channels.
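Threshold-based TOA estimation with a searchback window, of the general kind analyzed here, can be sketched in a few lines: pick the strongest energy sample, then search backwards for the earliest sample above a fraction of the peak, which recovers a weak direct path ahead of a strong multipath component. The threshold fraction, window length and signal values below are illustrative assumptions, not the specific estimators of the dissertation.

import numpy as np

def toa_searchback(energy, threshold_frac=0.4, window=40):
    """Take the strongest energy sample, then search back within `window`
    samples for the first (earliest) sample above a fraction of the peak;
    return that sample index as the TOA estimate."""
    peak = int(np.argmax(energy))
    thr = threshold_frac * energy[peak]
    lo = max(0, peak - window)
    above = np.nonzero(energy[lo:peak + 1] > thr)[0]
    return lo + int(above[0])

rng = np.random.default_rng(11)
e = rng.exponential(0.05, 200)   # noise-only energy samples
e[120] += 0.5                     # weak direct path
e[132] += 1.0                     # strong later multipath component
print("TOA sample:", toa_searchback(e))   # picks the earlier, weaker path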
43

STRINGHINI, GIANLUCA. « Development of an innovative PET module ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241249.

Texte intégral
Résumé :
Positron emission tomography (PET) is a technique based on the detection of two back-to-back 511 keV gamma rays originated by a positron-electron annihilation. The aim of this project is to develop an innovative PET detector module with high performance in terms of spatial, energy and timing resolution, while keeping the overall complexity reasonably low. The proposed module is based on a pixellated LYSO matrix and an MPPC detector. Compared to a double-side readout configuration, in which the scintillators are read on both sides by detectors, one detector is replaced by an optical light guide and a reflector. The light that would be collected by the second detector in the double-side readout approach is instead recirculated and collected by the nearby detectors thanks to the light guide on top of the module. Enabling this light-sharing mechanism allows reaching the same performance as a double-side readout configuration while decreasing the number of detector channels needed. Furthermore, it is possible to adopt a more-than-one-to-one coupling between scintillators and detectors in order to further decrease the number of channels needed and to improve the spatial resolution of the system. Studying the shared light distribution allows identifying the crystal in which the gamma-ray interaction took place. Several matrices are tested with different couplings between scintillators and detectors (one to one, four to one and nine to one), and the results show good crystal identification capability and an energy resolution on the order of 12% FWHM (one-to-one and four-to-one configurations) and 16% FWHM (nine-to-one configuration). For small-animal and organ-dedicated PET devices, there is a spatial resolution degradation close to the edges of the field of view (FOV) due to parallax error. This effect is mitigated by knowing the interaction position along the crystal main axis, information known as the depth of interaction (DOI). The DOI capabilities of the proposed module are tested, and a DOI resolution of 3 mm FWHM is reached for the one-to-one and four-to-one coupling configurations; for the nine-to-one configuration a DOI resolution of 4 mm FWHM is obtained. In order to obtain the DOI information, an attenuation behavior over the crystal length is introduced but, as a drawback, it degrades the timing performance of the proposed module. A method to reduce this effect is presented by including the DOI information in the evaluation of the timing resolution. The module shows a coincidence time resolution of 353 ps FWHM. The second part of this project is focused on the development of image reconstruction software able to include both the DOI and timing information. The reconstruction algorithm is described and presented, and a simulation study is performed in order to confirm the benefits of DOI and timing for the image quality. Furthermore, a complete time-of-flight (TOF) study is performed, evaluating the improvement of the signal-to-noise ratio (SNR) and spatial resolution as a function of the timing performance of the system.
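The TOF benefit mentioned at the end of the abstract is often quantified with the textbook rule of thumb SNR gain ≈ sqrt(2D/(c·Δt)). A minimal sketch (assumed object sizes; only the 353 ps CTR comes from the abstract):

```python
# Hedged worked example: first-order estimate of the TOF-PET SNR gain
# for an object of diameter D imaged with coincidence time resolution CTR.
C = 299_792_458.0          # speed of light, m/s

def tof_snr_gain(object_diameter_m: float, ctr_fwhm_s: float) -> float:
    """Classic variance-reduction estimate: sqrt(2*D / (c*CTR))."""
    return (2.0 * object_diameter_m / (C * ctr_fwhm_s)) ** 0.5

ctr = 353e-12              # CTR of the proposed module
for d in (0.20, 0.40):     # assumed FOV diameters, in metres
    dx = C * ctr / 2.0     # localization along the line of response
    print(f"D={d*100:.0f} cm: localization {dx*100:.1f} cm, "
          f"SNR gain ~ {tof_snr_gain(d, ctr):.2f}")
```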
Styles APA, Harvard, Vancouver, ISO, etc.
44

Grabas, Hervé. « Développement d'un système de mesure de temps de vol picoseconde dans l'expérience ATLAS ». PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00982076.

Texte intégral
Résumé :
This thesis presents a study of the sensitivity to physics beyond the Standard Model, and in particular to anomalous couplings between photons and W bosons. This is achieved by detecting in ATLAS the protons left intact after the interaction and measuring their time of flight with a precision of a few picoseconds on either side of the central detector. I also describe large-area photodetectors with a precision of a few picoseconds, and time-reconstruction algorithms based on fast sampling of the signal. Finally, the SamPic application-specific integrated circuit for very high precision time measurement is presented, together with the first measurement results obtained with it. In particular, they show an exceptional precision, better than 5 ps, on the time measured between two pulses.
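A minimal sketch of the waveform-sampling timing idea described above (a generic digital constant-fraction style interpolation with an assumed sampling rate and pulse model, not the SamPic algorithm itself):

```python
# Hypothetical sketch: extract a pulse arrival time from a fast-sampled
# waveform by linearly interpolating the crossing of a fixed fraction of
# the peak amplitude on the leading edge.
import numpy as np

def cfd_time(samples: np.ndarray, dt: float, fraction: float = 0.5) -> float:
    """Interpolated time at which the waveform first crosses
    `fraction` of its peak amplitude."""
    thr = fraction * samples.max()
    i = max(int(np.argmax(samples >= thr)), 1)   # first sample above threshold
    # linear interpolation between samples i-1 and i
    return (i - 1) * dt + dt * (thr - samples[i - 1]) / (samples[i] - samples[i - 1])

# toy pulse sampled at 3.2 GS/s with a 200 ps offset added to 5 ns
dt = 312.5e-12
t = np.arange(64) * dt
true_t0 = 5e-9 + 200e-12
pulse = np.exp(-((t - true_t0) / 1e-9) ** 2)
print(f"estimated edge time: {cfd_time(pulse, dt)*1e9:.3f} ns")
```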
Styles APA, Harvard, Vancouver, ISO, etc.
45

Hueso, González Fernando. « Nuclear methods for real-time range verification in proton therapy based on prompt gamma-ray imaging ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-204988.

Texte intégral
Résumé :
Accelerated protons are excellent candidates for treating several types of tumours. Such charged particles stop at a defined depth, where their ionisation density is maximum. As the dose deposited beyond this distal edge is very low, proton therapy minimises the damage to normal tissue compared to photon therapy. Nonetheless, inherent range uncertainties cast doubts on the irradiation of tumours close to organs at risk and lead to the application of conservative safety margins. This constrains significantly the potential benefits of proton over photon therapy and limits its ultimate aspirations. Prompt gamma rays, a by-product of the irradiation that is correlated to the dose deposition, are reliable signatures for the detection of range deviations and even for three-dimensional in vivo dosimetry. In this work, two methods for Prompt Gamma-ray Imaging (PGI) are investigated: the Compton camera (Cc) and Prompt Gamma-ray Timing (PGT). Their applicability in a clinical scenario is discussed and compared. The first method aims to reconstruct the prompt gamma-ray emission density map based on an iterative imaging algorithm and multiple position-sensitive gamma-ray detectors, arranged in a scatterer plane and an absorber plane. The second method has been recently proposed as an alternative to collimated PGI systems and relies on timing spectroscopy with a single monolithic detector: the detection times of prompt gamma rays encode essential information about the depth-dose profile as a consequence of the measurable transit time of ions through matter. At Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and OncoRay, detector components are characterised in realistic radiation environments as a step towards a clinical Cc. Conventional block detectors deployed in commercial Positron Emission Tomography (PET) scanners, made of Cerium-doped lutetium oxyorthosilicate - Lu2SiO5:Ce (LSO) or Bismuth Germanium Oxide - Bi4Ge3O12 (BGO) scintillators, are suitable candidates for the absorber of a Cc due to their high density and absorption efficiency with respect to the prompt gamma-ray energy range (several MeV). LSO and BGO block detectors are compared experimentally in clinically relevant radiation fields in terms of energy, spatial and time resolution. In addition, two BGO block detectors (from PET scanners), arranged as the BGO block Compton camera (BbCc), are deployed for simple imaging tests with high-energy prompt gamma rays produced in homogeneous Plexiglas targets by a proton pencil beam; the rationale is to maximise the detection efficiency in the scatterer plane despite a moderate energy resolution. Target shift, target thickness increase and beam energy variation experiments are conducted. Concerning the PGT concept, in a collaboration among OncoRay, HZDR and IBA, the first test at a clinical proton accelerator (Westdeutsches Protonentherapiezentrum Essen) with several detectors and heterogeneous phantoms is performed. The sensitivity of the method to range shifts is investigated, the robustness against background and the stability of the beam bunch time profile are explored, and the bunch time spread is characterised for different proton energies. With respect to the material choice for the absorber of the Cc, the BGO scintillator closes the gap with respect to the brighter LSO. The reason is the high energy of prompt gamma rays compared to the PET scenario, which significantly improves the energy, spatial and time resolution of BGO.
Regarding the BbCc, shifts of a point-like radioactive source are correctly detected, line sources are reconstructed, and one-centimetre proton range deviations are identified based on the evident changes of the back-projection images. Concerning the PGT experiments, for clinically relevant doses, range differences of five millimetres in defined heterogeneous targets are identified by numerical comparison of the spectrum shape; for higher statistics, range shifts down to two millimetres are detectable. Experimental data are well reproduced by analytical modelling. The Cc and PGT are ambitious approaches for range verification in proton therapy based on PGI. Intensive detector characterisation and tests in clinical facilities are mandatory for developing robust prototypes, since the energy range of prompt gamma rays spans the MeV region, not traditionally used in medical applications. Regarding the material choice for the Cc: notwithstanding the overall superiority of LSO, BGO catches up in the field of PGI. It can be considered a competitive alternative to LSO for the absorber plane due to its lower price, higher photoabsorption efficiency, and lack of intrinsic radioactivity. The results concerning the BbCc, obtained with relatively simple means, highlight the potential application of Compton cameras for high-energy prompt gamma-ray imaging. Nevertheless, technical constraints like the low statistics collected per pencil-beam spot (if clinical currents are used) question their applicability as a real-time, in vivo range verification method in proton therapy. PGT is an alternative approach which may have a faster translation into clinical practice due to its lower price and higher efficiency. A proton bunch monitor, higher detector throughput and quantitative range retrieval are the upcoming steps towards a clinically applicable prototype that may detect significant range deviations for the strongest beam spots. The experimental results emphasise the prospects of this straightforward verification method at a clinical pencil beam and establish this novel approach as a promising alternative in the field of in vivo dosimetry.
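For context, the Compton-camera event reconstruction mentioned above rests on standard Compton kinematics. A minimal sketch (generic physics, not the prototype's reconstruction code) computing the cone opening angle from the two energy deposits:

```python
# The incident photon direction lies on a cone whose half-angle follows
# from the energy E1 deposited in the scatterer and E2 in the absorber:
# cos(theta) = 1 - me*c^2 * (1/E2 - 1/(E1+E2)), assuming full absorption.
import math

ME_C2 = 0.511  # electron rest energy, MeV

def compton_cone_angle(e1_mev: float, e2_mev: float) -> float:
    """Opening half-angle (rad) of the Compton cone."""
    e0 = e1_mev + e2_mev                       # incident photon energy
    cos_theta = 1.0 - ME_C2 * (1.0 / e2_mev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.acos(cos_theta)

# a 4.44 MeV prompt gamma ray depositing 1.0 MeV in the scatterer
theta = compton_cone_angle(1.0, 3.44)
print(f"cone half-angle: {math.degrees(theta):.1f} deg")
```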
Styles APA, Harvard, Vancouver, ISO, etc.
46

Gouzien, Élie. « Optique quantique multimode pour le traitement de l'information quantique ». Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4110.

Texte intégral
Résumé :
This thesis studies multimode quantum optics, from the generation to the detection of light. It focuses on three main parts. First, multimode squeezed state generation within a cavity is studied. More specifically, we take into account a general quadratic Hamiltonian, which allows describing experiments involving an arbitrary number of modes and pumps within a medium performing four-wave mixing. We describe a generic approach combining Green functions and symplectic matrix decomposition. This general theory is illustrated on specific cases. First, low-dimensional examples are given. Then, a synchronously pumped optical parametric oscillator (SPOPO) is described and studied; its behaviour differs markedly from that of a SPOPO based on a second-order non-linearity. This work opens the way to the realization of quantum frequency combs with ring micro-resonators etched on silicon. Second, single-photon detectors are described taking into account temporal degrees of freedom. We give the positive operator-valued measure (POVM) elements describing such detectors, including realistic imperfections such as timing jitter, finite efficiency and dark counts. The use of these operators is illustrated on common quantum optics experiments. Finally, we show how time-resolved measurement improves the quality of the state generated by a heralded single-photon source. In the third part we propose a protocol for generating a hybrid state entangling continuous- and discrete-variable parts, in which the discrete part is time-bin encoded. This scheme is analysed in detail with respect to its resilience to experimental imperfections.
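A minimal numerical sketch of the detector-imperfection model discussed above (assuming Gaussian jitter and a flat dark-count rate; a simplified stand-in, not the thesis's POVM formalism): the click-time distribution of a temporal mode is smeared by the jitter and offset by dark counts.

```python
# p(click at t) ~ eta * (|f|^2 convolved with jitter)(t) + dark_rate,
# for a detector of efficiency eta viewing a temporal mode f(t).
import numpy as np

dt = 1e-12                                   # 1 ps time grid
t = np.arange(-2000, 2000) * dt
f2 = np.exp(-t**2 / (2 * (100e-12) ** 2))    # |f(t)|^2, 100 ps wide mode
f2 /= np.sum(f2) * dt                        # normalize to unit area

eta, sigma_j, dark_rate = 0.6, 50e-12, 1e3   # assumed efficiency, jitter, cps
jitter = np.exp(-t**2 / (2 * sigma_j**2))
jitter /= np.sum(jitter) * dt

p = eta * np.convolve(f2, jitter, mode="same") * dt + dark_rate
fwhm = dt * np.count_nonzero(p > p.max() / 2)
print(f"click-time FWHM ~ {fwhm*1e12:.0f} ps")   # broadened by the jitter
```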
Styles APA, Harvard, Vancouver, ISO, etc.
47

Petrignani, Savino. « High Timing Resolution Front-end Circuit for Silicon Photomultiplier Detectors ». Doctoral thesis, 2021. http://hdl.handle.net/11589/226498.

Texte intégral
Résumé :
Over the last few decades, technological advances in planar processes for the production of CMOS integrated circuits have also enabled the development of new high-performance solid-state detectors, among which it is worth mentioning silicon photomultipliers. The distinctive feature of such sensors, also known by the acronym SiPM, is the intrinsic amplification that they exhibit when operating in Geiger mode; in this condition, these devices generate a fast current signal of adequate amplitude even in response to the detection of a single impinging photon. Moreover, given their low cost, robustness and insensitivity to magnetic fields, SiPMs represent a valid alternative to the more established photomultiplier tubes (PMTs). Therefore, the application of these detectors is taking hold in a number of fields, especially where low light levels and fine time resolutions are concerned. This is the case of Positron Emission Tomography (PET), a medical imaging technique aimed at diagnosing specific diseases, where photomultipliers are employed to detect the gamma-ray photons emitted by the radiotracer injected into the patient's body. During this doctoral program, a new front-end circuit for silicon photomultipliers has been designed in a standard 130 nm CMOS technology. This project has been carried out in collaboration with the IC group of SLAC National Accelerator Laboratory, based in Menlo Park, California, with the aim of developing an analog channel for PET systems with state-of-the-art temporal resolution. Indeed, this electronic circuit is able to provide not only the energy of the detected event, but also the occurrence time of the photon absorption, with a time resolution of just a few tens of picoseconds, compliant with the design specifications. Subsequently, a multichannel Application Specific Integrated Circuit (ASIC) has been developed with the purpose of testing the analog front-end, implementing all the circuit blocks needed for the conversion, parsing and transmission of the digital data.
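A back-of-the-envelope sketch of the key front-end timing relation (a textbook estimate with assumed numbers, not measured values from the ASIC): the electronic jitter of a leading-edge discriminator is roughly the voltage noise divided by the signal slope at the threshold crossing.

```python
# sigma_t ~ sigma_noise / (dV/dt at threshold); assumed example values.
def timing_jitter(noise_rms_v: float, slew_rate_v_per_s: float) -> float:
    """Electronic timing jitter of a leading-edge discriminator."""
    return noise_rms_v / slew_rate_v_per_s

sigma_v = 0.5e-3        # 0.5 mV rms input-referred noise (assumed)
slew = 20e-3 / 1e-9     # 20 mV/ns signal slope at threshold (assumed)
print(f"sigma_t ~ {timing_jitter(sigma_v, slew)*1e12:.0f} ps rms")
```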
Styles APA, Harvard, Vancouver, ISO, etc.
48

Lin, Hong-Chih, et 林泓志. « Evaluation of Area and Energy Overheads of In-Situ Timing Fault Detectors for Variation-Tolerant Datapaths ». Thesis, 2015. http://ndltd.ncl.edu.tw/handle/m4swr8.

Texte intégral
Résumé :
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
Academic year 103 (ROC calendar, 2014/15)
Process, voltage, temperature, data, aging, and jitter are among the many factors that affect the performance of digital circuits, and variation becomes more severe in advanced technology nodes. Worst-case design limits performance. Timing-speculation variable-latency circuits, such as Razor-I and SL, can tolerate variation and therefore achieve better efficiency than worst-case designs. This thesis presents a timing-speculation variable-latency circuit design that is designed for the typical case, achieving higher performance while tolerating corner-case-based (best-case, typical-case, worst-case) variation. Timing-speculation variable-latency circuits carry inherent overheads: Razor-I suffers from the short-path problem, which requires adding buffers, while SL solves the short-path problem but still has to spend time waiting for the correct values during timing-fault detection. We propose a new timing-fault detection design, SL-TD, that uses transition detection to operate at higher frequencies. Experimental results for the typical-case design show that, at 0.88 GHz, SL improves performance by 21.05% over Razor-I while saving 35.94% in energy and 33.03% in area. Finally, the new timing-fault detection design SL-TD improves performance over SL by 26.67% at 1.11 GHz while saving 42.53% in energy and 40.87% in area.
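A conceptual sketch of Razor-style in-situ timing-fault detection (a generic model with assumed timings, not the SL-TD circuit itself): a shadow latch samples later than the main flip-flop, and a mismatch between the two flags a timing fault.

```python
# Behavioural toy model: the main FF samples at the clock edge, the
# shadow latch samples `delay` later; if a late data transition lands
# between the two sampling instants, the values differ -> fault.
from dataclasses import dataclass

@dataclass
class RazorFF:
    delay: float  # shadow sampling delay after the clock edge, seconds

    def sample(self, data_arrival: float, clk_edge: float,
               old_val: int, new_val: int) -> tuple[int, bool]:
        """Return (value latched by the main FF, timing-fault flag)."""
        main = new_val if data_arrival <= clk_edge else old_val
        shadow = new_val if data_arrival <= clk_edge + self.delay else old_val
        return main, main != shadow

ff = RazorFF(delay=200e-12)
# data arrives 80 ps after the edge: the main FF latches the stale value,
# the shadow latch catches the late transition -> fault detected
val, fault = ff.sample(data_arrival=1.08e-9, clk_edge=1.0e-9,
                       old_val=0, new_val=1)
print(f"latched={val}, timing_fault={fault}")
```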
Styles APA, Harvard, Vancouver, ISO, etc.
49

Laso, Garcia Alejandro. « Timing Resistive Plate Chambers with Ceramic Electrodes : for Particle and Nuclear Physics Experiments ». Doctoral thesis, 2014. https://tud.qucosa.de/id/qucosa%3A28599.

Texte intégral
Résumé :
The focus of this thesis is the development of Resistive Plate Chambers (RPCs) with ceramic electrodes. The use of ceramic composites, Si3N4/SiC, opens the way for the application of RPCs in harsh radiation environments. Future experiments like the Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt will need new RPCs with high rate capability and high radiation tolerance. Ceramic composites are especially suited for this purpose due to their resistance to radiation and chemical contamination. The bulk resistivity of these ceramics is in the range 10^7 - 10^13 Ohm cm. Since the bulk resistivity of the electrodes is the main factor determining the rate capability of an RPC, a dedicated measuring station and a measurement protocol were set up for these measurements. The dependence of the bulk resistivity on the different steps of the manufacturing process has been studied. Other electrical parameters, such as the relaxation time, the relative permittivity and the loss tangent, have also been investigated. Simulation codes for the investigation of RPC functionality were developed using the gas detector simulation framework GARFIELD++. The parameters of the two gas mixtures used in RPC operation have been extracted. Furthermore, theoretical predictions of time resolution and efficiency have been calculated and compared with experimental results. Two ceramic materials have been used to assemble RPCs: Si3N4/SiC, and Al2O3 with a thin (nm-thick) chromium layer deposited on it. Several prototypes have been assembled with active areas of 5x5 cm^2, 10x10 cm^2 and 20x20 cm^2. The number of gaps ranges from two to six. The gas gap widths were 250 micrometre and 300 micrometre. Mylar foils, fishing line and high-resistivity ceramics have been used as separator materials. Different detector architectures have been built and their effect on RPC performance analysed. The RPCs developed at HZDR and ITEP (Moscow) were systematically tested in electron and proton beams and with cosmic radiation over the course of three years. The performance of the RPCs was extracted from the measured data. The main parameters, such as time resolution, efficiency, rate capability, cluster size, detector currents and avalanche charge, were obtained and compared with other RPC systems worldwide. A comparison with phenomenological models was performed.
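A hedged illustration of why electrode bulk resistivity drives rate capability (textbook estimates with an assumed permittivity and avalanche charge, not the thesis's model): the electrode relaxes charge with tau = eps0·eps_r·rho, and the DC voltage drop across an electrode scales with particle flux, avalanche charge, resistivity and electrode thickness.

```python
# Rough scaling only: tau = eps0*eps_r*rho and V_drop ~ flux*q*rho*d.
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def relaxation_time(rho_ohm_cm: float, eps_r: float) -> float:
    """Charge relaxation time of the electrode material, seconds."""
    return EPS0 * eps_r * rho_ohm_cm * 1e-2    # Ohm*cm -> Ohm*m

def voltage_drop(flux_hz_cm2: float, q_pc: float,
                 rho_ohm_cm: float, d_cm: float) -> float:
    """DC voltage drop (V) across one electrode of thickness d."""
    rho = rho_ohm_cm * 1e-2                    # Ohm*m
    j = flux_hz_cm2 * 1e4 * q_pc * 1e-12       # current density, A/m^2
    return j * rho * d_cm * 1e-2

for rho in (1e9, 1e11, 1e13):                  # within the quoted range
    tau = relaxation_time(rho, eps_r=10.0)     # assumed permittivity
    dv = voltage_drop(1e5, q_pc=2.0, rho_ohm_cm=rho, d_cm=0.2)
    print(f"rho={rho:.0e} Ohm*cm: tau={tau:.2e} s, "
          f"V_drop at 100 kHz/cm^2 ~ {dv:.1f} V")
```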
Styles APA, Harvard, Vancouver, ISO, etc.
50

Golnik, Christian. « Treatment verification in proton therapy based on the detection of prompt gamma-rays ». Doctoral thesis, 2016. https://tud.qucosa.de/id/qucosa%3A30476.

Texte intégral
Résumé :
Background: The finite range of a proton beam in tissue and the corresponding steep distal dose gradient near the end of the particle track open new vistas for the delivery of a highly target-conformal dose distribution in radiation therapy. Compared to a classical photon treatment, the potential therapeutic benefit of a particle treatment is a significant dose reduction in the tumour-surrounding tissue at a comparable dose level applied to the tumour.

Motivation: The actually applied particle range, and therefore the dose deposition in the target volume, is highly sensitive to the tissue composition in the path of the protons. Particle treatments are planned via computed tomography images acquired prior to the treatment, and the conversion from photon stopping power to proton stopping power introduces an important source of range uncertainty. Furthermore, anatomical deviations from the planning situation affect the accurate dose deposition. Since there is no routine clinical measurement of the actually applied particle range, treatments are currently planned to be robust rather than optimal with regard to dose delivery. Robust planning incorporates the application of safety margins around the tumour volume as well as the usage of (potentially) unfavourable field directions. These pretreatment safety procedures aim to secure dose conformality in the tumour volume, however at the price of additional dose to the surrounding tissue. As a result, the unverified particle range constrains the principal benefit of proton therapy. An on-line, in-vivo range verification would therefore bring the potential of particle therapy much closer to the daily clinical routine.

Materials and methods: This work contributes to the field of in-vivo treatment verification by the methodical investigation of range assessment via the detection of prompt gamma-rays, a side product emitted due to proton-tissue interactions. In the first part, the concept of measuring the spatial prompt gamma-ray emission profile with a Compton camera is investigated with a prototype system consisting of a CdZnTe cross-strip detector as scatter plane and three side-by-side arranged, segmented BGO block detectors as absorber planes. In the second part, the novel method of prompt gamma-ray timing (PGT) is introduced. This technique has been developed in the scope of this work and a patent has been applied for. The necessary physical considerations for PGT are outlined, and the feasibility of the method is supported with first proof-of-principle experiments.

Results: Compton camera: Utilizing a 22-Na source, the feasibility of reconstructing the emission scene of a point source at 1.275 MeV was verified. Suitable filters on the scatterer-absorber coincident timing and the respective sum energy were defined and applied to the data. The source position and corresponding source displacements could be verified in the reconstructed Compton images. In a next step, a Compton imaging test at 4.44 MeV photon energy was performed. A suitable test setup was identified at the Tandetron accelerator at the Helmholtz-Zentrum Dresden-Rossendorf, Germany. This measurement setup provided a monoenergetic, point-like source of 4.44 MeV gamma-rays that was nearly free of background. Here, the absolute gamma-ray yield was determined. The Compton imaging prototype was tested at the Tandetron regarding (i) the energy resolution, timing resolution, and spatial resolution of the individual detectors, (ii) the imaging capabilities of the prototype at 4.44 MeV gamma-ray energy, and (iii) the Compton imaging efficiency. In a Compton imaging test, the source position and the corresponding source displacements were verified in the reconstructed Compton images. Furthermore, via the quantitative gamma-ray emission yield, the Compton imaging efficiency at 4.44 MeV photon energy was determined experimentally. PGT: The concept of PGT was developed and introduced to the scientific community in the scope of this thesis. A theoretical model for PGT was developed and outlined. Based on the theoretical considerations, a Monte Carlo (MC) algorithm capable of simulating PGT distributions was implemented. At the KVI-CART proton beam line in Groningen, The Netherlands, time-resolved prompt gamma-ray spectra were recorded with a small-scale, scintillator-based detection system. The recorded data were analyzed in the scope of PGT and compared to the modelled data, yielding excellent agreement and thus verifying the developed theoretical basis. For a hypothetical PGT imaging setup at a therapeutic proton beam, it was shown that the statistical error on the range determination could be reduced to 5 mm at a 90% confidence level for a single spot of 5x10^8 protons.

Conclusion: Compton imaging and PGT were investigated as candidates for treatment verification based on the detection of prompt gamma-rays. The feasibility of Compton imaging at photon energies of several MeV was proven, which supports the approach of imaging high-energy prompt gamma-rays. However, the applicability of a Compton camera under therapeutic conditions was found to be questionable, due to (i) the low device detection efficiency and the corresponding limited number of valid events that can be recorded within a single treatment and utilized for image reconstruction, and (ii) the complexity of the detector setup and attached readout electronics, which make the development of a clinical prototype expensive and time-consuming. PGT is based on a simple time-spectroscopic measurement approach. The collimation-less detection principle implies a high detection efficiency compared to the Compton camera. The promising results on the applicability under treatment conditions and the simplicity of the detector setup qualify PGT as a method well suited for a fast translation towards a clinical trial.
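For intuition on the PGT principle summarized above, a rough numerical sketch (generic relativistic kinematics with a crude linear energy-loss model, not the thesis's analytical treatment) of the proton transit time over which the prompt gamma-ray emission is spread:

```python
# The measurable transit time of protons through the target sets the
# width of the PGT distribution; here the kinetic energy is assumed to
# fall linearly with depth as a stand-in for the true stopping power.
import math

MP_C2 = 938.272  # proton rest energy, MeV
C = 299_792_458.0

def beta(kinetic_mev: float) -> float:
    """Relativistic speed v/c of a proton of given kinetic energy."""
    gamma = 1.0 + kinetic_mev / MP_C2
    return math.sqrt(1.0 - 1.0 / gamma**2)

def transit_time(e0_mev: float, range_m: float, steps: int = 1000) -> float:
    """Integrate dt = dx / v(x) along the (assumed linear) slow-down."""
    dx, t = range_m / steps, 0.0
    for i in range(steps):
        e = e0_mev * (1.0 - (i + 0.5) / steps)   # linear energy-loss model
        t += dx / (beta(e) * C)
    return t

# ~150 MeV protons with ~16 cm range in water (typical textbook numbers)
print(f"transit time ~ {transit_time(150.0, 0.16)*1e9:.2f} ns")
```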
Styles APA, Harvard, Vancouver, ISO, etc.