
Theses / dissertations on the topic "Cosmological analysis"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Below are the 50 best theses / dissertations for research on the topic "Cosmological analysis".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if it is present in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Pearson, Russell Charles. "The statistical analysis of cosmological models". Thesis, Queen Mary, University of London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264659.

2

Rivera, Echeverri José David [UNESP]. "Cosmological analysis of optical galaxy clusters". Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/152493.

Abstract:
Galaxy clusters are the largest bound objects observed in the universe. Since galaxies are considered tracers of dark matter, galaxy clusters allow us to study the formation and evolution of large-scale structures. Cluster number counts are sensitive to the cosmological model, hence they are used as probes to constrain the cosmological parameters. In this work we focus on the study of optical galaxy clusters. We start by analyzing the degradation of both precision and accuracy in photometric redshifts estimated via the ANNz2 and GPz machine-learning methods. In addition to the classical single-value photometric redshift (i.e., the mean value or maximum of the distribution), we implement an estimator based on Monte Carlo sampling using the cumulative distribution function. We show that this estimator, for the ANNz2 algorithm, presents the best agreement with the spectroscopic redshift distribution, albeit with a higher scatter. We also present the VT-FOFz cluster finder, which combines the Voronoi Tessellation and Friends-of-Friends techniques. Using mock catalogs, we estimate its performance: we compute the completeness and purity using a cylindrical region in 2+1 space (i.e., angular coordinates and redshift). For massive haloes and clusters with high richness, we obtain high values of completeness and purity. We compare the galaxy clusters detected by the VT-FOFz cluster finder with the redMaPPer SDSS DR8 cluster catalog, recovering ∼90% of the galaxy clusters of the redMaPPer catalog up to redshift z ≈ 0.33 when considering galaxies brighter than r = 20.6. Finally, we perform a cosmological forecast using an MCMC method for a flat wCDM model through galaxy cluster abundance. The fiducial model is a flat ΛCDM universe. The effects due to the estimated observable mass and the photometric redshifts are included via a self-calibration model. We employ the Tinker mass function to estimate the number counts in a mass range and redshift bin. We assume that the richness and the cluster mass are related through a power law. We recover the fiducial values at the 2σ confidence level for the considered tests.
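The Monte Carlo photo-z estimator mentioned above lends itself to a compact illustration: a point estimate is drawn by inverse-transform sampling of the redshift cumulative distribution function. The sketch below is a minimal stand-in, not the thesis code; the grid, the toy PDF, and the function name sample_photoz are hypothetical.

```python
import numpy as np

def sample_photoz(zgrid, pdf, n_draws=1, rng=None):
    """Draw photometric-redshift point estimates from a photo-z PDF
    by inverse-transform sampling of its cumulative distribution."""
    rng = np.random.default_rng() if rng is None else rng
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                      # normalize to [0, 1]
    u = rng.uniform(size=n_draws)       # uniform deviates
    return np.interp(u, cdf, zgrid)     # invert the CDF by interpolation

# Example: a toy bimodal photo-z PDF on a redshift grid
zgrid = np.linspace(0.0, 1.5, 301)
pdf = np.exp(-0.5 * ((zgrid - 0.35) / 0.04) ** 2) \
    + 0.4 * np.exp(-0.5 * ((zgrid - 0.80) / 0.06) ** 2)
z_mc = sample_photoz(zgrid, pdf, n_draws=10000)
print(z_mc.mean(), z_mc.std())
```

Unlike the mean or the mode, repeated draws from many galaxies reproduce the full shape of the underlying redshift distribution, which is the property the abstract highlights.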
3

Rivera, Echeverri José David. "Cosmological analysis of optical galaxy clusters /". São Paulo, 2017. http://hdl.handle.net/11449/152493.

Advisor: Maria Cristina Batoni Abdalla Ribeiro
Co-advisor: Filipe Batoni Abdalla
Committee member: Filipe Batoni Abdalla
Committee member: Laerte Sodré Júnior
Committee member: Marcos Vinícius Borges Teixeira Lima
Committee member: Martín Makler
Abstract: Galaxy clusters are the largest bound objects observed in the universe. Since galaxies are considered tracers of dark matter, galaxy clusters allow us to study the formation and evolution of large-scale structures. Cluster number counts are sensitive to the cosmological model, hence they are used as probes to constrain the cosmological parameters. In this work we focus on the study of optical galaxy clusters. We start by analyzing the degradation of both precision and accuracy in photometric redshifts estimated via the ANNz2 and GPz machine-learning methods. In addition to the classical single-value photometric redshift (i.e., the mean value or maximum of the distribution), we implement an estimator based on Monte Carlo sampling using the cumulative distribution function. We show that this estimator, for the ANNz2 algorithm, presents the best agreement with the spectroscopic redshift distribution, albeit with a higher scatter. We also present the VT-FOFz cluster finder, which combines the Voronoi Tessellation and Friends-of-Friends techniques. Using mock catalogs, we estimate its performance: we compute the completeness and purity using a cylindrical region in 2+1 space (i.e., angular coordinates and redshift). For massive haloes and clusters with high richness, we obtain high values of completeness and purity. We compare the galaxy clusters detected by the VT-FOFz cluster finder with the redMaPPer SDSS DR8 cluster catalog. We recover ∼90% of the galaxy clusters of the redMaPPer catalog up to redshift z ≈ 0.33 when considering galaxies brighter than r = 20.6. Finally, we perform a cosmological forecast using an MCMC method for a flat wCDM model through galaxy cluster abundance. The fiducial model is a flat ΛCDM universe. The effects due to the estimated observable mass and (Complete abstract: click electronic access below)
Doctorate
4

Stadel, Joachim Gerhard. "Cosmological N-body simulations and their analysis /". Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/5449.

5

McEwen, Jason Douglas. "Analysis of cosmological observations on the celestial sphere". Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612795.

6

Porredon, Diez de Tejada Anna Maria. "Multi-probe cosmological analysis with the dark energy survey". Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/669428.

Abstract:
Ongoing and future photometric surveys will enable detailed measurements of the late-time Universe and powerful tests of the nature of dark energy and General Relativity. These surveys will be able to obtain cosmological constraints from multiple probes, and the combination of these probes can improve their robustness and constraining power. This thesis is focused on the combination of multiple tracers of large-scale structure (LSS) to obtain tighter cosmological constraints. First, we combine the galaxy clustering from the Dark Energy Survey (DES) Year 1 (Y1) data with CMB lensing from the optimal combination of the South Pole Telescope (SPT) and Planck, obtaining constraints on the galaxy bias, the growth function, and the cosmological parameters. Our results are consistent with ΛCDM and other measurements of DES Y1. However, their constraining power is limited due to conservative scale cuts. We expect an improved signal-to-noise ratio in future analyses. We then combine the galaxy clustering of two different galaxy samples (the so-called multi-tracer approach) to explore the constraints on redshift-space distortions (RSD) and primordial non-Gaussianities (PNG). For this purpose, we consider a pair of optimistic samples (with large bias differences and number densities) and the DES Year 3 (Y3) lens samples. We find that the constraints on RSD can be improved by a factor of five at low redshift with respect to a single tracer, and the constraints on PNG can be improved by more than a factor of three. We also test the impact of including CMB lensing cross-correlations in our analysis, in which we keep the cosmology fixed, finding that it mainly improves the galaxy bias constraints. Last, we define and optimize a magnitude-limited galaxy sample to be used for the galaxy clustering measurements in the DES Y3 analysis, in combination with galaxy-galaxy lensing. We rely on Fisher forecasts, and we test how these change given the variations obtained in the number density and estimated redshift uncertainty for a set of magnitude cuts. We also characterize the impact of redshift binning choices on our cosmological constraints for this sample and the other DES Y3 lens sample, REDMAGIC. Finally, our forecasts show that we can potentially obtain 15% tighter constraints with this magnitude-limited sample, compared to REDMAGIC.
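As a rough illustration of the Fisher-forecast machinery this abstract relies on: marginalized 1σ errors follow from the inverse of the Fisher matrix. The numbers below are invented for illustration and do not come from the thesis.

```python
import numpy as np

# Toy Fisher forecast: marginalized 1-sigma errors are the square roots of
# the diagonal of the inverse Fisher matrix. The matrix below is made up
# for illustration (parameters: Omega_m, galaxy bias b).
F = np.array([[4.0e4, 1.2e4],
              [1.2e4, 9.0e3]])      # hypothetical Fisher matrix
cov = np.linalg.inv(F)              # parameter covariance
sigma = np.sqrt(np.diag(cov))       # marginalized errors
print("sigma(Omega_m) =", sigma[0])
print("sigma(b)       =", sigma[1])

# Unmarginalized (conditional) error for comparison:
print("conditional sigma(Omega_m) =", 1 / np.sqrt(F[0, 0]))
```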
7

Lin, Qiufan. "Learning salient information with neural networks for cosmological analysis". Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0532.

Abstract:
Deep learning neural networks are powerful data-driven tools for capturing information from data. However, neural networks are prone to fitting specific information that is entangled with the salient information relevant to certain tasks, which would bias the output prediction of a model. Aiming at developing robust deep learning tools in preparation for future cosmological surveys, this thesis focuses on learning salient information from multi-color images with neural networks. Specifically, we attempt to establish informative representations of data in order to capture salient information at different levels of abstraction in a few tasks: at a low level, we perform semi-supervised two-way translation of multi-color galaxy images between two surveys; at a high level, we propose a procedure to correct estimation biases of data-driven methods using a pre-trained representation. In addition to astrophysical studies, our interdisciplinary work explores the analysis of underwater optical images, to which the techniques developed in astrophysics can be applied. As new challenges have been encountered in dealing with astronomical data, predominantly noise and data sparsity, our work points to the need for advances in machine learning techniques to fit real problems and optimize the exploitation of data.
8

Hollowood, Devon. "Cosmological Studies through Large-Scale Distributed Analysis of Chandra Observations". Thesis, University of California, Santa Cruz, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10973254.

Abstract:

The formation history of galaxy clusters is a powerful probe of cosmology. In particular, one may place strong constraints on the dark energy equation of state by examining the evolution across redshift of the number density of galaxy clusters as a function of mass. In this thesis, I describe my contributions to cluster cosmology, in particular to the development of richness as an optical observable mass proxy.

I introduce redMaPPer, an optical cluster finder which represents an important upstream input for my thesis work. I next introduce the Mass Analysis Tool for Chandra (MATCha), a pipeline which uses a parallelized algorithm to analyze archival Chandra data. MATCha simultaneously calculates X-ray temperatures and luminosities and performs centering measurements for hundreds of potential galaxy clusters using archival X-ray exposures. I run MATCha on the redMaPPer SDSS DR8 cluster catalog and use MATCha's output X-ray temperatures and luminosities to analyze the galaxy cluster temperature-richness, luminosity-richness, luminosity-temperature, and temperature-luminosity scaling relations. I investigate the distribution of offsets between the X-ray center and redMaPPer center within 0.1 < z < 0.35 and explore some of the causes of redMaPPer miscentering. I collaborate with members of the Dark Energy Survey in order to repeat this analysis on Dark Energy Survey Year 1 data. I outline the various ways in which MATCha constitutes an important upstream work for a variety of astrophysical applications. These include the calibrations of two separate mass proxies, the study of the AGN fraction of galaxy clusters, and cosmology from cluster number densities and stacked weak lensing masses. Finally, I outline future upgrades and applications for MATCha throughout the lifespan of the Dark Energy Survey and the Large Synoptic Survey Telescope.
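Scaling relations like those analyzed here are commonly modeled as power laws, i.e. straight lines in log-log space. Below is a minimal sketch of such a fit on synthetic richness-luminosity data; all values, the assumed slope, and the pivot richness of 40 are invented, not MATCha output.

```python
import numpy as np

# Illustrative fit of a cluster scaling relation, L_X ~ lambda^alpha, as a
# straight line in log-log space. Richness (lam) and luminosity (Lx) are
# synthetic stand-ins with log-normal scatter.
rng = np.random.default_rng(42)
lam = rng.uniform(20, 200, size=300)              # richness
Lx_true = 1e43 * (lam / 40.0) ** 1.6              # erg/s, assumed power law
Lx = Lx_true * rng.lognormal(0.0, 0.3, size=300)  # log-normal scatter

slope, intercept = np.polyfit(np.log10(lam / 40.0), np.log10(Lx), 1)
print(f"alpha = {slope:.2f}, L_X at pivot richness 40 = 10^{intercept:.2f} erg/s")
```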

9

Alfedeel, Alnadhief Hamed Ahmed. "The impact of inhomogeneity on the analysis of cosmological data". Doctoral thesis, University of Cape Town, 2013. http://hdl.handle.net/11427/4952.

Abstract:
We consider the Lemaître metric, which is the inhomogeneous, spherically symmetric metric containing a non-static, comoving, perfect fluid with non-zero pressure. We use it to generalise the metric-of-the-cosmos algorithm, first derived for the zero-pressure Lemaître-Tolman (LT) metric, to the case of non-zero pressure and non-zero cosmological constant. We present a method of integration with respect to the null coordinate w, instead of comoving t, and reduce the Einstein Field Equations (EFEs) to a system of differential equations (DEs). We show that the non-zero pressure introduces new functions, and makes several functions depend on time that did not in the LT case. We present clearly, step by step, an algorithmic solution for determining the metric of the cosmos from cosmological data for the Lemaître model, on which a numerical implementation can be based. In our numerical execution of the algorithm we have shown that there are some regions which need special treatment: the origin and the maximum in the diameter distance. We have coded a set of MATLAB programs for the numerical implementation of this algorithm, for the case of pressure with a barotropic equation of state and non-zero Λ. Initially, the computer code was successfully tested using artificial and ideal cosmological data on the observer's past null cone, for homogeneous and non-homogeneous spacetimes. The program was then generalized to handle realistic data, which has statistical fluctuations. A key step is the data smoothing process, which fits a smooth curve to discrete data with statistical fluctuations, so that the integration of the DEs can proceed. Since the algorithm is very sensitive to the second derivative of one of the data functions, this required some experimentation with methods. Finally, we have successfully extracted the metric functions for the Lemaître model, and their evolution, from the initial data on the past null cone.
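The smoothing step described above can be illustrated with a smoothing spline, whose second derivative stays well behaved. This is a Python stand-in on assumed toy data, not the thesis's MATLAB code; the redshift grid and distance shape are invented.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Sketch of the smoothing step: fit a smoothing spline to noisy data on the
# past null cone so that the second derivative (to which the metric
# integration is sensitive) is well behaved. Data here are synthetic.
rng = np.random.default_rng(1)
z = np.linspace(0.01, 1.5, 200)                    # redshift samples
d_true = z * (1 - 0.3 * z)                         # toy diameter-distance shape
d_obs = d_true + rng.normal(0.0, 0.01, z.size)     # statistical fluctuations

spl = UnivariateSpline(z, d_obs, k=4, s=len(z) * 0.01**2)  # k>=4: smooth d2
d_smooth = spl(z)
d2 = spl.derivative(2)(z)    # second derivative fed to the DE integration
print(d2[:5])
```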
10

PIETROBON, DAVIDE. "Making the best of cosmological perturbations: theory and data analysis". Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2010. http://hdl.handle.net/2108/1197.

Abstract:
Cosmology has entered the precision epoch thanks to several very accurate experiments. Cosmologists now have access to an array of tools to test the cosmological concordance model and constrain its parameters; the Cosmic Microwave Background radiation (CMB), in particular, has been playing a crucial role in this ambition. Many questions remain nonetheless unanswered, especially concerning the physics of the early Universe, the inflationary mechanism which set the initial conditions for the Universe's expansion, and the nature of the late-time acceleration of the Universe's expansion. My research contributes to both of these subjects, the common ground being the development of a statistical tool (needlets, a new "frame" on the sphere) to analyse the CMB. By means of needlets, we measure the Integrated Sachs-Wolfe effect by cross-correlating the WMAP and NVSS datasets and characterise dark energy properties using a phenomenological fluid model. Motivated by our findings, we study in detail a parameterisation of the dark components, dark matter and dark energy, which makes use of an affine equation of state, constraining the parameters of the model by combining the WMAP and SDSS datasets. We apply needlets to the WMAP 5-year data release, testing the Gaussianity of the CMB perturbations. Our approach is twofold: we first focus on the maps, detecting anomalous spots located in the southern hemisphere, and check their effect on the angular power spectrum. We next measure the needlet three-point correlation function (bispectrum) and characterise it in terms of its overall amplitude, constraining the primordial f_NL parameter, and considering its properties according to the geometry of the triangle configurations which contribute to the total power. We find a significant anomaly in the isosceles configurations, again in the southern hemisphere. Finally, we focus on the construction of an optimal estimator for the (needlet) bispectrum, taking into account foreground residuals due mainly to Galactic emission.
11

Faerber, Timothy. "Chi-Squared Analysis of Measurements of Two Cosmological Parameters Over Time". Thesis, Uppsala universitet, Observationell astrofysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-399031.

Abstract:
For this project, a historical statistical analysis of the amplitude of mass fluctuations (σ₈) and Hubble's constant (H₀) parameters in the standard cosmological model was carried out to determine whether or not the published error bars truly represent the dispersion of values. Analysis of the chi-squared (χ²) values of the data for σ₈ (60 data points and χ² between 183.167 and 189.037) shows that the associated probability Q is extremely low, with Q = 1.5597 × 10⁻¹⁵ for the weighted average and Q = 1.2107 × 10⁻¹⁴ for the best fit of the data. This was also the case for the χ² values of H₀ (163 data points and χ² between 484.3977 and 575.655), where Q = 4.2176 × 10⁻³⁴ for the linear fit of the data and Q = 1.0342 × 10⁻⁴⁷ for the weighted average of the data. Further analysis shows that, for the data in question, a linear fit is a better estimate than the weighted average. The general conclusion is that the statistical error bars have been underestimated (in around 20% of the measurements), or the systematic errors were not properly taken into account.
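For reference, the χ²-to-probability conversion used in this kind of analysis can be sketched as follows; the measurements below are hypothetical placeholders, not the datasets analyzed in the thesis. Q is the probability of obtaining a χ² at least this large by chance.

```python
import numpy as np
from scipy.stats import chi2

# Sketch of the chi-squared consistency test: combine measurements x_i with
# errors s_i into a weighted average, compute chi^2, and convert it to the
# probability Q of exceeding that chi^2 by chance. Values are illustrative.
x = np.array([67.4, 73.2, 69.8, 74.0, 70.0])   # hypothetical H0 measurements
s = np.array([0.5, 1.3, 1.1, 1.4, 2.2])        # their quoted 1-sigma errors

w = 1.0 / s**2
xbar = np.sum(w * x) / np.sum(w)               # weighted average
chisq = np.sum(((x - xbar) / s) ** 2)          # chi^2 about the average
dof = len(x) - 1                               # one fitted parameter
Q = chi2.sf(chisq, dof)                        # survival function gives Q
print(f"weighted average = {xbar:.2f}, chi2 = {chisq:.2f}, Q = {Q:.3g}")
```

A tiny Q, as found in the thesis, signals that the scatter among measurements exceeds what their quoted error bars allow.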
12

Humphreys, Neil Paul. "Observational analysis of the inhomogeneous universe". Thesis, University of Portsmouth, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310380.

13

Sedin, Victor. "Stability Analysis of Equilibrium Points and Symmetry Curves in Discrete Cosmological Models". Thesis, KTH, Fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208202.

14

Pavlov, Anatoly. "Constraining competing models of dark energy with cosmological observations". Diss., Kansas State University, 2015. http://hdl.handle.net/2097/20345.

Abstract:
Doctor of Philosophy
Department of Physics
Bharat Ratra
The last decade of the 20th century was marked by the discovery of the accelerated expansion of the universe. This discovery puzzles physicists and has yet to be fully understood. It contradicts the conventional theory of gravity, i.e. Einstein's General Relativity (GR). According to GR, a universe filled with dark matter and ordinary matter, i.e. baryons, leptons, and photons, can only expand with deceleration. Two approaches have been developed to study this phenomenon. One is to assume that GR might not be the correct description of gravity, hence a modified theory of gravity has to be developed to account for the observed acceleration of the universe's expansion. This approach is known as the "Modified Gravity Theory". The other is to assume that the energy budget of the universe has one more component, which causes the expansion of space to accelerate on large scales. Dark Energy (DE) was introduced as a hypothetical type of energy homogeneously filling the entire universe and interacting very weakly, or not at all, with ordinary and dark matter. Observational data suggest that if DE is assumed, then its contribution to the energy budget of the universe at the current epoch should be about 70% of the total energy density of the universe. In the standard cosmological model, a DE term is introduced into the Einstein GR equations through the cosmological constant, constant in time and space and proportional to the metric tensor g_μν. While this model so far fits most available observational data, it has some significant conceptual shortcomings. Hence there are a number of alternative cosmological models of DE in which the dark energy density is allowed to vary in time and space.
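For the record, the statement that a matter-filled GR universe can only decelerate follows from the acceleration (second Friedmann) equation; a positive Λ, or any component with sufficiently negative pressure, flips the sign:

```latex
% Acceleration (second Friedmann) equation with cosmological constant:
\frac{\ddot{a}}{a} \;=\; -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}
% For matter (p ~ 0) and radiation (p = rho c^2 / 3) the first term is
% negative, so acceleration (\ddot{a} > 0) requires Lambda > 0 or a
% component with p < -rho c^2 / 3.
```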
15

Tarr, Michael. "Techniques for cosmological analysis of next generation low to mid-frequency radio data". Thesis, University of Portsmouth, 2018. https://researchportal.port.ac.uk/portal/en/theses/techniques-for-cosmological-analysis-of-next-generation-low-to-midfrequency-radio-data(2adc3934-dd27-4dc2-b846-d508136faf8c).html.

Abstract:
For centuries, astronomical instrumentation has continued to develop and improve. This has led to the incredible precision seen in recent observations, with ever greater accuracy to come. From their measurements, cosmologists have produced a complex model of the Universe. This indicates that most of the energy in the Universe is in some "dark" form that does not interact directly with the electromagnetic spectrum. One of the greatest challenges of modern cosmology is the understanding of these dark components. In particular, weak gravitational lensing has emerged as a powerful tool for probing these parts of the cosmological model. An area of observation which has undergone accelerated improvement in recent years is radio astronomy. The radio regime, therefore, is an exciting source of new discoveries and further cosmological constraints. Several studies have shown that the lensing signal can be detected at radio wavelengths, while others have demonstrated the significant impact of combining the results from next-generation radio and optical telescopes. In order to fully realise the potential of lensing and other measurements for cosmology, a comprehensive understanding of the radio galaxy population is also required. In this thesis, I consider the future of radio observations for precision cosmology and the required developments in analysis techniques. I aim to contribute to this progress by focusing specifically on: (i) providing novel estimators for radio weak lensing measurements and (ii) studying the redshifts and statistics of galaxy populations in the radio, using optical spectroscopy. Both of these areas will soon be the subject of large next-generation observations, in the form of the Square Kilometre Array (SKA) continuum surveys and a joint experiment between a next-generation spectroscopy facility (WEAVE) for the William Herschel Telescope (WHT) and the LOw Frequency ARray (LOFAR).
16

Ndlovu, Ndukuyakhe. "A comparative analysis of rock art in southern Africa : animals and cosmological models". Thesis, University of Newcastle Upon Tyne, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613627.

Abstract:
This dissertation examines the animal patterns derived from a detailed statistical analysis of published rock art data, data from the Brandberg/Daureb region, and fieldwork I conducted in the northern parts of the KwaZulu-Natal province, South Africa. This study, which highlights the value of quantitative analysis and attempts to find alternative interpretations for animal representation in rock art, is the first overview of the distribution of animals in southern African rock art. It provides an opportunity to explain an aspect of the art that has not previously received much attention in southern African rock art studies. While different rock art researchers have previously noted that there is variation in which animal species dominate across southern Africa, this study quantifies the extent to which such variation exists. In attempting to explain the variation in the representation of animals in southern African rock art, I use two main approaches: (i) correlation between the animals most represented in the art and the ten vegetation biomes of southern Africa, and (ii) the application of an adapted four-cell matrix model aimed at assessing alternative interpretations of rock art, in particular totemic and secular interpretations. Through the first approach, I show that there is a reasonable correlation between the areas where each of the four animals (eland, giraffe, kudu, and springbok) dominates and their known natural distribution. This is confirmed by their historically recorded natural distribution and by faunal records. By applying the modified four-cell matrix model to rock art data from six regions, I establish that one animal takes central predominance in each. This is a characteristic of shamanic communities. However, the major deviation is that the four-cell-derived interpretation does not overly emphasise the dominance of eland to the exclusion of other animals that are well represented in the art.
17

Mainetti, Gabriele. "The Herschel Space Observatory: Pipeline development and early cosmological surveys results". Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3427432.

Abstract:
This thesis deals with all the technical and scientific issues related to the use of infrared and sub-millimeter wavelengths in astronomy. In particular, the thesis focuses on the Herschel Space Observatory, with particular attention to one of the instruments on board, SPIRE, whose design and components are described in detail. The problems related to data reduction are presented in the central chapters: all phases of the data-reduction pipeline are described, namely the transformation of raw telemetry data into "scientific" data usable by the astronomical community. The thesis then deals with the analysis of these data, particularly the tools developed to easily analyze the large datasets released to the astronomical community. The scientific side is discussed in the final chapters, with particular attention to the so-called cosmological surveys. The thesis then focuses on the first results obtained in one of the largest projects designed for Herschel, i.e. HerMES, and on the application of a technique known as statistical analysis of the probability of deflection, P(D), to improve our knowledge of the number counts below the confusion limit. This thesis emphasizes, therefore, both the technical aspects (with particular attention to the software infrastructure) of an important mission such as Herschel and the purely scientific aspects (formation and evolution of galaxies, observational cosmology) related to astronomical observations in the infrared and sub-millimetric bands.
18

Narayan, Gautham Siddharth. "Light Curves of Type Ia Supernovae and Preliminary Cosmological Constraints from the ESSENCE Survey". Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10841.

Abstract:
The ESSENCE survey discovered 213 Type Ia supernovae at redshifts 0.10 < z < 0.81 between 2002 and 2008. We present their R- and I-band light curve measurements, obtained using the MOSAIC II imager at the CTIO 4 m, along with rapid-response spectroscopy for each object from a range of large-aperture ground-based telescopes. We detail our program to obtain quantitative classifications and precise redshifts from our spectroscopic follow-up of each object. We describe our efforts to improve the precision of the calibration of the CTIO 4 m natural photometric system. We use several empirical metrics to measure our internal photometric consistency and our absolute calibration of the survey. We assess the effect of various sources of systematic error on our measured fluxes, and estimate that the total systematic error budget from the photometric calibration is ~1%. We combine 108 ESSENCE SNIa that pass stringent quality cuts with a compilation of 441 SNIa from the three-year results presented by the Supernova Legacy Survey, and with Baryon Acoustic Oscillation measurements from the Sloan Digital Sky Survey, to produce preliminary cosmological constraints from the SNIa. This constitutes the largest sample of well-calibrated, spectroscopically confirmed SNIa to date. Assuming a flat Universe, we obtain a joint constraint of Ω_M = 0.266 (+0.026, −0.016) (stat, 1σ) and w = −1.112 (+0.069, −0.072) (stat, 1σ). These measurements are consistent with a cosmological constant.
Physics
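The Hubble-diagram model behind such constraints can be sketched compactly: in a flat (Ω_M, w) cosmology, the distance modulus follows from an integral over the expansion rate. The helper names below are hypothetical; the Ω_M and w defaults echo the abstract's best fit, while H0 = 70 km/s/Mpc is an arbitrary placeholder.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the SNIa Hubble-diagram model: distance modulus mu(z) in a flat
# (Omega_M, w) cosmology. Parameter defaults are illustrative.
C_KM_S = 299792.458   # speed of light [km/s]

def E(z, om, w):
    """Dimensionless expansion rate H(z)/H0 for flat wCDM."""
    return np.sqrt(om * (1 + z) ** 3 + (1 - om) * (1 + z) ** (3 * (1 + w)))

def mu(z, om=0.266, w=-1.112, h0=70.0):
    """Distance modulus from the luminosity distance D_L = (1+z) * D_C."""
    dc, _ = quad(lambda zp: 1.0 / E(zp, om, w), 0.0, z)  # comoving distance integral
    dl = (1 + z) * (C_KM_S / h0) * dc                    # in Mpc
    return 5 * np.log10(dl) + 25

print(mu(0.5))   # distance modulus of a z = 0.5 supernova in this model
```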
19

Zarrouk, Pauline. "Clustering Analysis in Configuration Space and Cosmological Implications of the SDSS-IV eBOSS Quasar Sample". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS297/document.

Abstract:
The ΛCDM model of cosmology assumes the existence of an exotic component, called dark energy, to explain the late-time acceleration of the expansion of the universe at redshift z < 0.7. Alternative scenarios to this cosmological constant suggest modifying the theory of gravitation based on general relativity at cosmological scales. Since fall 2014, the SDSS-IV eBOSS multi-object spectrograph has undertaken a survey of quasars in the almost unexplored redshift range 0.8 ≤ z ≤ 2.2, with the key science goal of complementing the constraints on dark energy and extending the test of general relativity to higher redshifts by using quasars as direct tracers of the matter field. In this thesis work, we measure and analyse the two-point correlation function of the first two years of data of the eBOSS quasar sample to constrain the cosmic distances, i.e. the angular diameter distance D_A and the expansion rate H, and the growth rate of structure fσ8 at an effective redshift z_eff = 1.52. First, we build large-scale structure catalogues that account for the angular and radial incompleteness of the survey. Then, to obtain robust results, we investigate several potential systematics; in particular, modeling and observational systematics are studied using dedicated mock catalogs, which are fictional realizations of the data sample. These mocks are created with known cosmological parameters, such that they can be used as a benchmark to test the analysis pipeline. The results on the evolution of distances are consistent with the predictions for ΛCDM with Planck parameters assuming a cosmological constant. The measurement of the growth of structure is consistent with general relativity and hence extends its validity to higher redshift. We also provide updated constraints on extensions of ΛCDM and models of modified gravity. This study is a first use of eBOSS quasars as tracers of the matter field and will be included in the analysis of the final eBOSS sample at the end of 2019, with an expected improvement in statistical precision of a factor of 2. Together with BOSS, eBOSS will pave the way for future programs such as the ground-based Dark Energy Spectroscopic Instrument (DESI) and the space-based mission Euclid. Both programs will extensively probe the intermediate redshift range 1 < z < 2 with millions of spectra, improving the cosmological constraints by an order of magnitude with respect to current measurements.
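Two-point correlation functions like the one measured here are typically estimated with the Landy-Szalay estimator, ξ(r) = (DD − 2DR + RR)/RR, built from normalized pair counts in data and random catalogs. A minimal sketch on uniform toy points (not eBOSS data; pair_counts is a hypothetical helper):

```python
import numpy as np
from scipy.spatial import cKDTree

# Landy-Szalay estimator sketch: xi(r) = (DD - 2DR + RR) / RR, with pair
# counts normalized by the total number of pairs.
def pair_counts(a, b, edges, same=False):
    """Normalized pair counts between catalogs a and b in separation bins."""
    ta, tb = cKDTree(a), cKDTree(b)
    cum = ta.count_neighbors(tb, edges)           # cumulative counts <= r
    counts = np.diff(cum).astype(float)           # counts per bin
    if same:
        counts /= 2                               # each pair counted twice
        norm = len(a) * (len(a) - 1) / 2
    else:
        norm = len(a) * len(b)
    return counts / norm

rng = np.random.default_rng(0)
data = rng.uniform(0, 100, size=(2000, 3))        # toy "data" positions
rand = rng.uniform(0, 100, size=(8000, 3))        # random catalog
edges = np.linspace(1, 20, 11)                    # separation bins

dd = pair_counts(data, data, edges, same=True)
dr = pair_counts(data, rand, edges)
rr = pair_counts(rand, rand, edges, same=True)
xi = (dd - 2 * dr + rr) / rr
print(xi)   # consistent with zero for unclustered toy data
```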
20

Gigante, Lorenzo. "New cosmological constraints from a weak lensing analysis of the AMICO galaxy cluster catalogue in the Kilo-Degree Survey". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17583/.

Abstract:
The spatial and dynamical properties of the large-scale structures of the Universe provide one of the most powerful probes to constrain the cosmological framework. In particular, the gravitational lensing of galaxy clusters can be efficiently exploited to constrain the main cosmological parameters. In this work, we present new cosmological constraints inferred from the analysis of weak lensing shear data extracted from the KiDS-DR3 photometric survey. The galaxy cluster sample used in this work was extracted by Maturi et al., 2018, using the Adaptive Matched Identifier of Clustered Objects (AMICO) algorithm (Bellagamba et al., 2018). This cluster catalogue was constructed by selecting only objects with a signal-to-noise ratio larger than 3.5, in the redshift range 0.1 < z < 0.6. The final sample consists of 6972 galaxy clusters in total. A full Bayesian analysis (MCMC) is performed to model the stacked shear profiles. We mainly exploit the clusters' 2-halo term to constrain the matter density parameter Ω_m. We find that the adopted modelling successfully assesses both the cluster masses and the matter density parameter when fitting shear profiles up to the largest scales probed. The derived masses of the cluster samples are in good agreement with those estimated by Bellagamba et al. (2019). Our work confirms the reliability of AMICO and of the KiDS-DR3 data. Moreover, our results provide strong observational evidence of the 2-halo signal in the stacked gravitational lensing of galaxy clusters, and demonstrate the reliability of this probe for constraining cosmological model parameters. The main result of this thesis work is a new, robust constraint on Ω_m, assuming a flat ΛCDM cosmology. Specifically, we find Ω_m = 0.285 ± 0.023, estimated from the full posterior probability distribution. This value is consistent and competitive with that estimated by WMAP9 (0.273 ± 0.049), but slightly in tension with the Planck18 constraint (0.317 ± 0.008).
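A full Bayesian shear-profile analysis is beyond the scope of a listing like this, but the MCMC idea can be shown with a minimal Metropolis sampler on a one-parameter toy model; the data, model, and prior below are invented for illustration and are not the thesis's likelihood.

```python
import numpy as np

# Minimal Metropolis sampler, as a stand-in for the full Bayesian (MCMC)
# shear-profile analysis described above.
rng = np.random.default_rng(3)
x = np.linspace(1, 10, 20)
y = 0.285 * x + rng.normal(0, 0.2, x.size)   # noisy toy "signal"

def log_post(om):
    """Log-posterior: flat prior on (0, 1) times a Gaussian likelihood."""
    if not 0.0 < om < 1.0:
        return -np.inf
    return -0.5 * np.sum((y - om * x) ** 2 / 0.2**2)

chain, cur = [], 0.5
for _ in range(20000):
    prop = cur + rng.normal(0, 0.02)         # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop                           # accept
    chain.append(cur)
samples = np.array(chain[5000:])             # discard burn-in
print(f"toy parameter: {samples.mean():.3f} +/- {samples.std():.3f}")
```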
21

SHARMA, ASHISH. "Searches of a Cosmological Gravitational-Wave Background with 3G Detectors: Probing the Very Early Universe". Doctoral thesis, Gran Sasso Science Institute, 2021. http://hdl.handle.net/20.500.12571/16201.

Abstract:
Gravitational-wave detection from compact binary coalescences with current gravitational-wave detectors has made a tremendous impact on our understanding of astrophysical sources, and attention now turns to detecting a stochastic gravitational-wave background (SGWB) signal. An SGWB comprises the superposition of numerous astrophysical and cosmological sources, providing a direct window onto early-universe phenomena and fundamental physics. Observation of the early-universe gravitational-wave background signal would be paramount. To date, there is no direct or statistical evidence of a stochastic cosmological gravitational-wave background (SCGWB) signal, apart from upper limits placed on the associated parameters after analyzing advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) and advanced Virgo (AdV) data. With the proposed third-generation ground-based gravitational-wave detectors, Einstein Telescope and Cosmic Explorer, we might be able to detect evidence of an SCGWB. However, digging out these prime signals will be a challenging quest, as the dominant astrophysical foreground from compact binary coalescences will mask the SCGWB. The idea of dealing with this foreground signal and increasing future gravitational-wave detectors' sensitivity to SCGWB searches is the foundation of the work presented in this thesis. In this work, I study a subtraction-noise projection method with future ground-based detectors, which first subtracts the dominating foreground signals from compact binary coalescences and then makes it possible to reduce the residuals left after subtraction, significantly improving our chances of detecting an SCGWB. The subtraction of the foreground signal depends on accurate parameter estimation of the individual binary signals present in the detector data. I carried out the analysis using posterior sampling for the parameter estimation of binary black hole mergers. The removal of an astrophysical foreground in future detectors, with potentially several 10^5 signals per year, will be computationally very challenging. I extended state-of-the-art Bayesian parameter estimation and detector modeling software (Bilby) to incorporate foreground-reduction methods based on geometric interpretations of matched filtering. I also developed a numerical approach for the calculation of Fisher matrices that allows the implementation of arbitrary waveform models, i.e. not only analytical ones. As a result, I demonstrate the sensitivity improvement of SCGWB searches and find that the ultimate sensitivity of SCGWB searches will not be limited by the residuals left after subtracting the estimated binary black hole foreground. However, the fraction of the astrophysical foreground that cannot be detected even with third-generation instruments, or the possible presence of other signals (non-deterministic astrophysical stochastic background signals), can further impact the sensitivity of SCGWB searches. To search for an isotropic SGWB and its statistical evidence in aLIGO and AdV data, I am involved in research activities with the stochastic group of the LIGO-Virgo Collaboration, contributing to data analysis and isotropic SGWB searches. The main objectives of my contribution are to analyze the whole third-observation-run data of aLIGO and AdV, looking for variations in the signal if present, i.e. coherence between the different baselines of the aLIGO and AdV detectors, and then to use a stochastic energy-density spectrum following a power-law distribution to check for SGWBs of either astrophysical or cosmological origin, based on the power-law index, which characterizes the background signal and helps to put upper limits on the respective SGWB. I therefore present the upper limits on isotropic SGWB searches from the third-observation-run data of the aLIGO and AdV detectors. The limits on the isotropic SGWB from the third-observation-run stochastic data analysis improved by a significant factor in comparison with the upper limits from the first and second observation runs.
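The power-law energy-density spectrum mentioned at the end is simple to write down: Ω_GW(f) = Ω_ref (f/f_ref)^α. The sketch below uses placeholder amplitudes rather than any published limit; α = 2/3 is the index expected for a compact-binary background.

```python
import numpy as np

# Power-law model for a stochastic background's energy-density spectrum,
# Omega_GW(f) = Omega_ref * (f / f_ref)**alpha, as used in isotropic SGWB
# searches. The amplitude below is a placeholder, not an O3 result.
def omega_gw(f, omega_ref, alpha, f_ref=25.0):
    """Fractional GW energy density per log frequency at frequency f [Hz]."""
    return omega_ref * (f / f_ref) ** alpha

f = np.logspace(1, 3, 5)             # 10 Hz to 1 kHz
for alpha in (0.0, 2 / 3, 3.0):      # flat, compact-binary, steep indices
    print(alpha, omega_gw(f, omega_ref=1e-9, alpha=alpha))
```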
22

McCully, Curtis, Charles R. Keeton, Kenneth C. Wong, and Ann I. Zabludoff. "Quantifying Environmental and Line-of-sight Effects in Models of Strong Gravitational Lens Systems". IOP PUBLISHING LTD, 2017. http://hdl.handle.net/10150/623240.

Abstract:
Matter near a gravitational lens galaxy or projected along the line of sight (LOS) can affect strong lensing observables by more than contemporary measurement errors. We simulate lens fields with realistic three-dimensional mass configurations (self-consistently including voids), and then fit mock lensing observables with increasingly complex lens models to quantify biases and uncertainties associated with different ways of treating the lens environment (ENV) and LOS. We identify the combination of mass, projected offset, and redshift that determines the importance of a perturbing galaxy for lensing. Foreground structures have a stronger effect on the lens potential than background structures, due to nonlinear effects in the foreground and downweighting in the background. There is dramatic variation in the net strength of ENV/LOS effects across different lens fields; modeling fields individually yields stronger priors for H0 than ray tracing through N-body simulations. Models that ignore mass outside the lens yield poor fits and biased results. Adding external shear can account for tidal stretching from galaxies at redshifts z ≥ z_lens, but it requires corrections for external convergence and cannot reproduce nonlinear effects from foreground galaxies. Using the tidal approximation is reasonable for most perturbers as long as nonlinear redshift effects are included. Even then, the scatter in H0 is limited by the lens profile degeneracy. Asymmetric image configurations produced by highly elliptical lens galaxies are less sensitive to the lens profile degeneracy, so they offer appealing targets for precision lensing analyses in future surveys like LSST and Euclid.
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Modest, Heike [Verfasser], e Gregor [Akademischer Betreuer] Morfill. "Nonlinear data analysis of the CMB : cosmological principles put to test / Heike Modest. Betreuer: Gregor Morfill". München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2015. http://d-nb.info/1080122303/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Tsagas, Christos G. "Covariant and gauge-invariant analysis of cosmological perturbations in the presence of a primordial magnetic field". Thesis, University of Sussex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263210.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Rabuffo, Giovanni [Verfasser], e Benjamin [Akademischer Betreuer] Bahr. "The Renormalization Group Flow Analysis for a Cosmological Sector of Spin Foam Models / Giovanni Rabuffo ; Betreuer: Benjamin Bahr". Hamburg : Staats- und Universitätsbibliothek Hamburg, 2018. http://d-nb.info/1174306459/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Kjellqvist, Jimmy. "Foundation for an analysis of the dust of the Nearby Universe". Thesis, Uppsala universitet, Institutionen för fysik och astronomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-438508.

Texto completo da fonte
Resumo:
The current cosmological paradigm of an accelerating cosmic expansion is supported by observations of Type Ia supernovae. However, the light emitted by these and other cosmological sources is not only redshifted by cosmic expansion but will also interact with matter along the light path. Intergalactic dust in particular can lead to additional reddening and dimming of distant sources due to light scattering or absorption, yielding systematic contaminations of cosmological measurements. This project builds a foundation and some of the tools that will be used in a master's thesis aimed at analysing the spatial distribution and the properties of this cosmic dust. While previous studies assumed cosmic dust to be homogeneously distributed, it is expected to follow the spatial distribution of the galaxies from which it was expelled. This project also starts to recreate previous homogeneous dust models and measurements, which will be used in the future master's thesis. An analysis of the methods and tools used, along with some of the dust properties, is also made in this project.
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Shaw, Laurie Duncan. "Analysing bound structures in cosmological N-body simulations". Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614351.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Gallo, Stefano. "On galaxy cluster modelling in the context of cosmological analyses". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP075.

Texto completo da fonte
Resumo:
Galaxy clusters are the most massive gravitationally bound objects in the Universe. They form from the highest peaks in the primordial density fields, and are located at the nodes of a complex filamentary network called the cosmic web. The number of clusters as a function of mass and redshift, known as cluster number counts, has emerged as a powerful probe to constrain the parameters of the cosmological model. Comparing cluster observation with theoretical predictions requires accurate modelling of the cluster population, which needs to account for the clusters' observable characteristics, as well as their physical relationships with hidden quantities like the total mass. The use of an inaccurate cluster model can result in biased constraints on the cosmological parameters. With the new generation of large cluster surveys, which will significantly reduce the statistical uncertainties of cosmological analyses with galaxy clusters, it becomes crucial to identify and reduce all possible sources of biases associated with inaccurate modelling. It is therefore important to improve our understanding of the physical processes impacting galaxy clusters, and to test the possible impacts of simplifying modelling assumptions on the cosmological analyses. In this Thesis, I approached these issues focusing on two aspects: the characterisation of the matter distribution in cluster environments, beyond the common spherical symmetry assumption; and the impact of assuming an inaccurate cluster model in the cluster detection process, and its influence on the cosmological analysis. Concerning the matter distribution in and around clusters, I performed three studies, focusing in particular on the gas component. First, I investigated statistically the azimuthal distribution of matter in a set of simulated clusters, quantifying the departure from spherical symmetry. I showed that the gas azimuthal features are strongly correlated with the dark matter ones and with the cluster's structural and physical properties, as well as the number of filaments connected to the cluster. Second, I conducted a case study on the detectability of filamentary structures in the outskirts of galaxy clusters using statistical methods, based on X-ray and galaxy observations of the cluster Abell 2744. I combined the results of two techniques: the aperture multipole decomposition and the T-Rex filament finder. For the first time in a blind analysis of X-rays alone, I identified three filamentary structures connected to the cluster. From the three-dimensional distribution of galaxies, I identified two additional filamentary structures along the line of sight, in the front and in the back of the cluster. Third, I trained a generative model to produce images of galaxy clusters with realistic morphologies avoiding the high computational costs of cosmological simulations. I showed that the images produced by this model exhibit anisotropic large-scale morphologies, offering improved realism over spherically symmetric analytic generated images. At small scales, the model-generated images appear smoother, more spherical and slightly less concentrated than training images, on average. 
This may prevent the use of model-generated images in place of high-resolution simulations, but they may be useful to improve realism in low-resolution applications. In the second approach, I studied the effect of the cluster model in the detection of galaxy clusters with the matched multi-filter method in the context of the Planck mission, studying the case in which the real cluster population differs from the model assumed in the detection template. I showed that the shape of the cluster profile has a strong impact on the completeness function, while the effect of non-spherical cluster morphologies is moderate, and that these impacts affect the cosmological constraints, potentially shifting them by up to ~1σ.
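As a rough illustration of the aperture multipole decomposition mentioned in the filament study above, the sketch below computes multipole moments of a 2D map within an annulus; the flat radial weighting, the toy map and all numbers are illustrative assumptions, not the thesis pipeline.

```python
import numpy as np

def aperture_multipoles(image, center, r_in, r_out, m_list=(1, 2, 3, 4)):
    """Multipole moments Q_m = sum over annulus pixels of I * exp(-i m phi),
    for r_in < R < r_out around `center` (flat radial weight)."""
    ny, nx = image.shape
    yy, xx = np.indices((ny, nx))
    dx, dy = xx - center[0], yy - center[1]
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)
    sel = (r > r_in) & (r < r_out)
    return {m: np.sum(image[sel] * np.exp(-1j * m * phi[sel])) for m in m_list}

# Toy map: circular halo plus two opposite "filaments" -> strong m=2 signal.
yy, xx = np.indices((256, 256))
r = np.hypot(xx - 128, yy - 128) + 1.0
phi = np.arctan2(yy - 128, xx - 128)
img = r**-1.5 * (1.0 + 0.5 * np.cos(2 * phi))
Q = aperture_multipoles(img, (128, 128), 40, 100)
print({m: round(abs(q), 2) for m, q in Q.items()})   # |Q_2| dominates
```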
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Harper, Charles L. "On the nature of time in cosmological perspective : a comparative study of the weak and strong interaction chronometries via an analysis of high resolution ⁸⁷Rb-β⁻-⁸⁷Sr, ²³⁵/²³⁸U-α-²⁰⁷/²⁰⁶Pb and ¹⁴⁷Sm-α-¹⁴³Nd isotopic age determinations of meteoritic, lunar and geological samples". Thesis, University of Oxford, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.670359.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Trusov, Svyatoslav. "Exploitation des données du DESI Bright Galaxy Survey pour contraindre la nature de l’énergie noire et la relativité générale". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS296.

Texto completo da fonte
Resumo:
DESI (Dark Energy Spectroscopic Instrument) is a spectroscopic instrument installed on a 4m telescope at the Kitt Peak Observatory in the United States. By collecting the spectra of more than 40 million galaxies, we can calculate their radial distances and thus create a three-dimensional map of the major structures in our Universe. Since gravity is the dominant force at these scales, by measuring the speed at which matter clumps together to form structures such as galaxies, we can constrain theories of gravity. This thesis focuses on the Bright Galaxy Survey (BGS), which is DESI's densest data sample and is composed of the galaxies closest to us (z < 0.4) and the most luminous (with an apparent magnitude cut-off in the r band at r < 19.5). One problem with such a high density of galaxies at low redshift is that the information that can be extracted from a limited volume of space is also very limited. This means that the constraints on the cosmological parameters are not dominated by the statistics of the sample but by a fundamental limit linked to the amount of accessible information, called the 'cosmic variance'. However, there is a technique that allows us to get around this obstacle, known as multi-tracer analysis. It consists of dividing the data catalogue into two (or more) sub-catalogues with different clustering properties and taking into account the spatial cross-correlations between the different tracers, which add extra information and thus make it possible to improve the constraints on certain cosmological parameters. Another problem that arises because of the high density of the BGS is that of estimating measurement errors by means of a covariance matrix. Usually, to estimate the covariance matrix, we create thousands of cosmological simulations that mimic the survey, thus creating several realisations of different observable universes and allowing us to estimate the errors in the measurement of 2-point statistics. Producing a thousand such simulations is very costly in terms of computing time and memory. To date, only 25 such simulations exist, which is insufficient to obtain a sufficiently accurate covariance matrix. In this thesis, we propose a new method, called FitCov, which is a hybrid combination of the jackknife covariance estimation method based on data resampling and the classical method based on simulations. A third problem we encountered during the course of this thesis relates more generally to clustering analyses. In the standard approach, the information on the cosmological parameters contained in the two-point statistics is compressed in the form of intermediate parameters such as the growth rate of structures already mentioned and the Alcock-Paczynski geometric parameters that allow us to measure the expansion rate of the universe, for example. This compression loses some cosmological information when analysing large-scale structure, but it significantly speeds up the determination of cosmological parameters. In this thesis, we present a way of accelerating the inference of cosmological parameters directly from the two-point statistics, without going through the compressed parameters. To this end, we have developed a neural network that replaces the part of the analytical model that takes a long time to predict the non-linear evolution of the two-point statistics, and we have shown that our hybrid model is as accurate as the analytical model.
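A minimal sketch of the jackknife half of a FitCov-style hybrid estimator is given below; the one-parameter amplitude fit against mock covariances is a deliberately crude stand-in for the actual fitting procedure, and all names and shapes are placeholders.

```python
import numpy as np

def jackknife_covariance(deleted_one):
    """Delete-one jackknife covariance.  `deleted_one` has shape
    (n_regions, n_bins): row k is the 2-pt statistic recomputed with
    sky region k removed from the catalogue."""
    x = np.asarray(deleted_one)
    n = x.shape[0]
    d = x - x.mean(axis=0)
    return (n - 1.0) / n * d.T @ d

def rescale_to_mocks(cov_jk, cov_mocks):
    """Hybrid idea (schematic): keep the well-sampled jackknife shape but
    fit its overall amplitude against a covariance from a few mocks."""
    alpha = np.trace(cov_mocks) / np.trace(cov_jk)   # crude one-parameter fit
    return alpha * cov_jk

rng = np.random.default_rng(0)
fake = rng.normal(size=(50, 10))              # 50 jackknife regions, 10 bins
print(jackknife_covariance(fake).shape)       # (10, 10)
```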
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Loureiro, Arthur Eduardo da Mota. "Galaxy Power Spectrum Analysis: A Monte-Carlo Approach". Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-06072015-060434/.

Texto completo da fonte
Resumo:
Many galaxy surveys are planned to release their data over the next few years. Each survey has its own geometrical limitations, which are imprinted on the data as a selection function, i.e. the spatial distribution of certain types of galaxies. Given a galaxy map (real or mock), the main goal of this work is to obtain information about how the selection function affects some of the cosmological parameters which can be probed from large-scale structure. A Markov Chain Monte Carlo method is proposed in order to probe the effects of considering the selection function parameters as nuisance parameters. The method combines realizations of simulated galaxy catalogs, generated from theoretical matter power spectra, with an optimal power spectrum estimator. Theory and data are then compared in a multivariate Gaussian representing the likelihood function. This Monte-Carlo method has proven robust and capable of probing selection function effects on the cosmological parameters, showing that simple marginalization over the nuisance parameters might lead to wrong estimates of the cosmology. The method is applied to obtain forecasts for these effects on the upcoming J-PAS Luminous Red Galaxies data and is employed to obtain constraints on the Hubble parameter (H0), the dark matter density (Ωc) and two parameters of the equation of state of dark energy (w0 and wa).
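The flavour of such an analysis can be sketched with a toy Metropolis sampler over a Gaussian power-spectrum likelihood; the model, its 'selection' nuisance parameter and all numbers below are invented for illustration and are not the thesis pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def model_pk(k, theta):
    """Toy power spectrum: amplitude * k^slope, damped by a 'selection'
    nuisance parameter s (all names illustrative)."""
    amp, slope, s = theta
    return amp * k**slope * np.exp(-s * k)

k = np.linspace(0.02, 0.2, 20)
p_true = model_pk(k, [1e4, -1.5, 2.0])
cov = np.diag((0.05 * p_true) ** 2)          # 5% Gaussian errors
p_obs = rng.multivariate_normal(p_true, cov)
icov = np.linalg.inv(cov)

def log_like(theta):
    r = p_obs - model_pk(k, theta)
    return -0.5 * r @ icov @ r

# Minimal Metropolis sampler over cosmological + nuisance parameters;
# marginalising over s is then just histogramming the other columns.
theta = np.array([9e3, -1.4, 1.8])
steps = np.array([100.0, 0.01, 0.05])
chain, lcur = [], log_like(theta)
for _ in range(20000):
    prop = theta + steps * rng.standard_normal(3)
    lprop = log_like(prop)
    if np.log(rng.uniform()) < lprop - lcur:
        theta, lcur = prop, lprop
    chain.append(theta.copy())
chain = np.array(chain)
print(chain[5000:].mean(axis=0))             # posterior means after burn-in
```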
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Nevalainen, Jukka. "Determining cosmological parameters using X-ray analyses of clusters of galaxies and the Cepheid period-luminosity relation". Helsinki : University of Helsinki, 2000. http://ethesis.helsinki.fi/julkaisut/mat/tahti/vk/nevalainen/.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Gilberthorpe, Emma Louise. "The Fasu, Papua New Guinea : analysing modes of adaptation through cosmological systems in a context of petroleum extraction /". St. Lucia, Qld, 2003. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe17527.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Chahine, Layal. "Chemical analysis of the bulge globular cluster NGC 6553". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18752/.

Texto completo da fonte
Resumo:
This work presents the analysis of a sample of 558 red giant stars located in the direction of the Bulge globular cluster NGC 6553 and observed with the multi-object spectrograph FLAMES at the Very Large Telescope of ESO. According to their metallicity and radial velocity, we identified a sample of 245 stars as likely members of the cluster. This represents the largest spectroscopic sample collected so far for NGC 6553. The main goal was to obtain detailed abundances for NGC 6553 with the purpose of studying its chemical composition and investigating its relation with the formation of the Bulge and with other Bulge clusters. In particular, the study aimed at verifying the possible presence of an iron spread, which would have revealed a complex nature of NGC 6553, in spite of its standard classification as a genuine globular cluster. Moreover, we investigated the presence of chemical anomalies through the measurement of the Mg and Al abundances, which show large star-to-star variations in most Galactic clusters. The analysis has revealed that NGC 6553 has an iron abundance of [Fe/H] = −0.13 ± 0.01 dex with no significant internal spread. This result indicates that the cluster was not able to retain supernova ejecta in its potential well. The [Si/Fe] abundance ratio is enhanced with respect to the solar value, indicating that the cluster was formed from gas enriched by supernova ejecta. The [Si/Fe] measured in NGC 6553 is compatible with that measured in other Bulge stars. Finally, NGC 6553 does not exhibit a Mg-Al anti-correlation and shows no significant spread in either Mg or Al. This suggests that the massive stars populating the cluster in its early stage did not reach interior burning temperatures higher than ∼70 million K.
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Barchiesi, Luigi. "X-ray analysis and broad-band properties of [NeV]-selected type 2 AGN". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18749/.

Texto completo da fonte
Resumo:
The study of type 2 AGN is fundamental to understanding the evolutionary phases of AGN and their influence on galaxy evolution, and to shedding light on the origin of the cosmic X-ray background (XRB) emission at E > 30 keV. The goal of my master thesis project is the study of 94 [NeV]-selected type 2 AGN at z = 1 in the COSMOS field, through the analysis of their X-ray spectra and optical-to-far-IR SEDs. The use of the new X-ray data from the Chandra COSMOS Legacy catalogue extends the X-ray coverage of the COSMOS field and provides a more uniform coverage. We also aim at studying the hosts of type 2 AGN to characterize parameters such as stellar mass and SFR, and to investigate whether these galaxies differ from "normal" galaxies due to the AGN influence. Our analysis showed that at least half of the sources are heavily obscured AGN and at least 4% can be classified as Compton-thick (CT) objects. These new data will be used to improve the estimate of the fraction of CT AGN at z = 1 and their contribution to the XRB. The use of a SED-fitting code allowed us to separate the AGN emission, in particular the mid-IR emission ascribed to the obscuring torus, which provided an estimate of the AGN bolometric power, from the galaxy emission. We used the latter to estimate the stellar mass of the galaxy and the SFR. We compared our results with those expected for "main sequence" galaxies and found that our sources do not differ significantly from "normal" galaxies; hence the AGN has, so far, a limited impact on the host-galaxy star formation. We found good correlations between the AGN 12 μm and 2-10 keV luminosities and between the AGN bolometric luminosities derived from the SED fitting and those from the rest-frame 2-10 keV intrinsic luminosities. These correlations can be used to estimate the AGN power of sources for which X-ray data are either not available or too shallow. The entire work underlines the importance of a multi-wavelength approach to properly constrain the properties of type 2 AGN.
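Converting an observed X-ray flux into a rest-frame intrinsic luminosity, as done when building such correlations, can be sketched as follows; the photon index, cosmology and flux value below are illustrative assumptions, not the thesis's adopted values.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

def lx_rest(flux_2_10, z, gamma=1.8, h0=70.0, om0=0.3):
    """Rest-frame 2-10 keV luminosity (erg/s) from an observed-frame
    2-10 keV flux (erg/cm^2/s), for a power-law spectrum with photon
    index `gamma`; the K-correction for the same band is (1+z)^(gamma-2)."""
    cosmo = FlatLambdaCDM(H0=h0, Om0=om0)
    dl = cosmo.luminosity_distance(z).to(u.cm).value
    return 4.0 * np.pi * dl**2 * flux_2_10 * (1.0 + z) ** (gamma - 2.0)

# A 1e-14 erg/cm^2/s source at z = 1:
print(f"L_X = {lx_rest(1e-14, 1.0):.2e} erg/s")
```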
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

La, Caria Marlis-Madeleine. "X-ray analysis of local mid-infrared selected Compton-thick AGN candidates". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Encontre o texto completo da fonte
Resumo:
Cosmic X-ray background synthesis models (Gilli 2007) require a significant fraction of obscured AGN, some of which are expected to be heavily obscured (Compton-thick), but the number density of observationally found obscured sources is still an open issue (Vignali 2010, 2014). This thesis work takes advantage of recent NuSTAR data and is based on a multiwavelength research approach. Gruppioni et al. 2016 compared, for a sample of local 12 micron Seyfert galaxies, the AGN bolometric luminosity derived from SED decomposition with the same quantity obtained from the 2-10 keV luminosity (IPAC-NED). A difference of up to two orders of magnitude was found between these quantities for some sources. Thus, the intrinsic X-ray luminosity obtained by correcting for the obscuration may be underestimated. In this thesis we have tested this hypothesis by re-analysing the X-ray spectra of three of these sources (UGC05101, NGC1194 and NGC3079), for which observations from NuSTAR and Chandra and/or XMM-Newton were available. This extends our analysis to energies above 10 keV and thus allows us to estimate the AGN column density as reliably as possible. For spectral fitting we made use of both the commonly used XSPEC package and the two very recent MYtorus and BNtorus physical models. The available wide bandpass allowed us to achieve new and more solid insights into the X-ray spectral properties of these sources. The measured absorption column densities are highly suggestive of heavy obscuration. After correcting the X-ray AGN luminosity for the obscuration estimated through our spectral analysis, we compared the L_X values in the 2-10 keV band with those derived from the MIR band by means of the relation of Gandhi (2009). As expected, the values derived from this relation are in good agreement with those we measured, indicating that the column densities were underestimated in previous literature works.
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Arici, Ilenia. "Lensing analysis of the clusters in the Three Hundred Project". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23411/.

Texto completo da fonte
Resumo:
We discuss the lensing analysis of a large sample of numerically simulated galaxy clusters from the Three Hundred project. The main motivation is to build a database containing maps and other data usable in a vast number of applications, from testing the several predictions of the ΛCDM model to the calibration of cluster mass proxies. The dataset consists of the 324 most massive cluster-scale halos identified in the MultiDark cosmological simulation. In the framework of the Three Hundred project, each halo was resimulated at high resolution using a zoom-in technique including full baryonic physics. We consider clusters in 9 simulation snapshots corresponding to z = 0.116-0.942. Each cluster was projected along three lines of sight. We produced maps of the lensing potential, deflection angles, convergence, shear and determinant of the lensing Jacobian. Two kinds of maps were produced: higher-resolution maps focused on the central strong-lensing regions of the clusters, and maps with a wider field of view more suitable for studying the weak-lensing regime. In addition, we computed critical lines and caustics for several source redshifts and measured strong-lensing cross sections and projected weak-lensing masses and concentrations. We consider some practical applications of the data. We show that the cross sections for multiple images and for galaxy-galaxy strong lensing (GGSL) are growing functions of cluster properties such as the mass, the concentration and the Einstein radius. We show that, using the clusters in the Three Hundred project, we reproduce the results of Meneghetti et al. (2020), who showed that cluster halos simulated in the context of the ΛCDM model produce about an order of magnitude fewer GGSL events than observed in massive galaxy clusters. We illustrate the origin of the concentration bias of strong-lensing selected galaxy clusters. We discuss an experiment, involving the creation of mock shear catalogs, aimed at calibrating the relation between true and weak-lensing masses.
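For the convergence maps mentioned above, the basic scaling is κ = Σ/Σ_cr, with Σ_cr = c² D_s / (4πG D_l D_ls); below is a minimal sketch with astropy, assuming a flat ΛCDM cosmology with placeholder parameters.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy.constants import c, G
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # placeholder cosmology

def sigma_crit(z_lens, z_source):
    """Critical surface density Sigma_cr = c^2 D_s / (4 pi G D_l D_ls),
    returned in Msun / Mpc^2."""
    dl = cosmo.angular_diameter_distance(z_lens)
    ds = cosmo.angular_diameter_distance(z_source)
    dls = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)
    sc = c**2 / (4 * np.pi * G) * ds / (dl * dls)
    return sc.to(u.Msun / u.Mpc**2)

# A convergence map then follows from a projected mass map (Msun/Mpc^2):
# kappa_map = sigma_map / sigma_crit(z_l, z_s).value
print(sigma_crit(0.5, 2.0))
```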
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Casavecchia, Benedetta. "Studying synthetic column density maps and absorption spectra from galactic wind models". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25361/.

Texto completo da fonte
Resumo:
Galactic winds are multi-phase outflows that probe how feedback regulates the mass and metallicity of galaxies. Their cold phase, mainly observable with absorption lines, is often detected hundreds to thousands of pc away from the galactic plane and with velocities of hundreds of km s⁻¹. To understand observations, it is important to theoretically study how such lines are produced via numerical simulations of cloud systems exposed to winds and starburst UV backgrounds. In this thesis we study the thermodynamics, ion populations, and ion absorption lines of cold and warm radiative clouds evolving from magnetised wind-cloud systems and an unmagnetised shock-multicloud model. We account for radiative cooling with two different cooling floors and magnetic fields with two different orientations. In our wind-cloud simulations, cold clouds survive the interaction with the wind longer than warmer clouds, since they are less exposed to instabilities. Magnetic fields have a larger influence on warm clouds than on cold clouds. If transverse to the wind direction, the field creates a shield that confines the expansion of the cloud, delaying its evaporation. In our shock-multicloud simulation, cold gas at large distances is not accelerated by ram pressure but, instead, precipitates from mixed gas out of thermal equilibrium. To study ion populations and create synthetic spectra, we developed a flexible python interface to link our PLUTO simulations to TRIDENT via the YT-package infrastructure and CLOUDY. Our ion population analysis reveals that different cooling floors and magnetic fields affect the column densities of several ions. H I, O VI, Mg II, C III, and Si IV are more sensitive to the cooling floors, and H I, Mg II, C III, and Si IV can also trace the initial magnetic field direction, making them good candidates for comparisons with observations.
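Column densities such as those discussed above come from integrating the ion number density along a sightline; here is a minimal sketch with invented numbers (this is not the PLUTO/TRIDENT interface itself).

```python
import numpy as np

def column_density(n_ion, dl_cm):
    """Column density N = sum(n dl) along a sightline, with the ion
    number density n_ion (cm^-3) sampled on cells of length dl_cm (cm)."""
    return np.sum(n_ion * dl_cm)

# Toy sightline through a 100 pc cloud with n_HI = 0.1 cm^-3:
pc = 3.086e18                       # cm
n = np.full(1000, 0.1)              # cm^-3
dl = np.full(1000, 0.1 * pc)        # 0.1 pc cells
print(f"N_HI = {column_density(n, dl):.2e} cm^-2")   # ~3e19
```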
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Spinelli, Claudia. "A deep learning approach to the weak lensing analysis of galaxy clusters". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Encontre o texto completo da fonte
Resumo:
In the present work we propose an innovative method which can potentially allow the unbiased estimation of galaxy cluster structural parameters from weak gravitational lensing maps using deep learning techniques. This method represents a viable alternative to more classical and time-consuming approaches, which are not ideal when dealing with large datasets. We adapted the Inception-v4 architecture to our purpose and implemented it in PyTorch. This model is trained with labeled data and then applied to unlabeled data in order to predict from the input maps, for each cluster, the virial mass, the concentration, the number of substructures and the mass fraction in substructures. The determination of these quantities is particularly important because of their possible applications in several cosmological tests. The training and test sets consist of maps produced with the MOKA software, which generates semi-analytical mass distributions of galaxy clusters and computes convergence and reduced shear maps. The simulated halos are placed at different redshifts, i.e. z = 0.25, 0.5, 0.75, 1. The complexity and the level of realism of the simulations have been increased over a sequence of experiments. We train the model on noiseless convergence maps, which are then replaced with noiseless reduced shear maps. Finally, the same model is trained using reduced shear maps which include shape noise for a given number density of lensed galaxies. We find that our model produces more accurate and precise measurements of the virial masses and concentrations compared to the standard approach of fitting the convergence profile. It is able to learn information about the triaxial shape of the clusters during the training phase. Consequently, well-known biases due to projection effects and substructures are strongly mitigated. Even when a typical observational noise is added to the maps, the network is capable of measuring the cluster structural parameters well.
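The regression set-up can be sketched with a deliberately tiny PyTorch network standing in for the Inception-v4 backbone; the architecture, input shapes and random targets below are placeholders, not the thesis model.

```python
import torch
import torch.nn as nn

class ClusterRegressor(nn.Module):
    """Tiny stand-in for the thesis backbone: a convolutional net mapping
    a lensing map to four parameters (mass, concentration, N_sub, f_sub)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16, 4))

    def forward(self, x):
        return self.head(self.features(x))

model = ClusterRegressor()
maps = torch.randn(8, 1, 128, 128)            # batch of mock lensing maps
loss = nn.functional.mse_loss(model(maps), torch.randn(8, 4))
loss.backward()                               # an optimizer step would follow
print(loss.item())
```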
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Alberi, Matteo. "La PCA per la riduzione dei dati di SPHERE IFS". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6563/.

Texto completo da fonte
Resumo:
The first chapter of this thesis presents an overview of the main exoplanet detection methods: the Radial Velocity method, the Astrometric method, the Pulsar Timing method, the Transit method, the Microlensing method and, finally, the Direct Imaging method, which is explored in depth in the following chapters. The second chapter presents the principles of diffraction and shows how to attenuate the stellar light with a coronagraph; it describes the aberration phenomena introduced by the instrumentation and by the distorting effects of the atmosphere, which give rise to the so-called speckles; it then presents modern technical solutions such as active and adaptive optics, which have allowed a considerable improvement in the quality of the observations. The third chapter illustrates the Differential Imaging techniques that make it possible to effectively remove the speckles and improve the contrast of the images. The fourth chapter presents a mathematical description of Principal Component Analysis, the statistical method used for the reduction of the astronomical data. The fifth chapter is dedicated to SPHERE, the instrument designed for the Very Large Telescope (VLT); in particular, it describes its integral-field spectrograph IFS, with which the data analysed in this thesis were obtained during the testing phase. The sixth chapter presents the data reduction procedures and the application of the IDL algorithm LA_SVD, which applies Principal Component Analysis and made it possible, analogously to the Differential Imaging methods discussed earlier, to remove the speckles and improve the contrast of the images. The concluding part discusses the results.
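The core of an LA_SVD-style reduction is PCA via singular value decomposition; the sketch below, with an invented toy cube, shows the subtraction of the leading speckle modes and is not the SPHERE pipeline itself.

```python
import numpy as np

def pca_speckle_subtract(cube, n_modes=5):
    """Project each frame of an IFS cube (n_frames, n_pix) onto its first
    principal components and subtract the reconstruction, removing the
    quasi-static speckle pattern shared by the frames."""
    X = cube - cube.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]                      # principal components
    recon = (X @ modes.T) @ modes             # low-rank speckle model
    return X - recon                          # residuals: planet signal survives

cube = np.random.normal(size=(40, 64 * 64))   # toy: 40 frames, 64x64 pixels
res = pca_speckle_subtract(cube, n_modes=5)
print(res.shape)
```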
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Seppi, Riccardo. "Simulating mock observations for weak lensing analysis and galaxy clusters mass estimates". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18760/.

Texto completo da fonte
Resumo:
The purpose of this thesis is to learn how to perform a weak-lensing analysis and estimate the mass of a galaxy cluster. For this task, I performed simulations in order to obtain mock Subaru Telescope observations of galaxies distorted by a massive galaxy cluster. Their distortion was quantified according to the Kaiser-Squires-Broadhurst (KSB) method. Chapter 1 is an introduction to concepts crucial to this thesis, in the fields of cosmology, galaxy clusters and gravitational lensing. Chapter 2 discusses the process of the simulation, its inputs and outputs, and the analysis of the distortion of galaxy shapes. Chapter 3 presents the results of the analysis of the real galaxy cluster MACS J1206, the comparison with its model, and how to obtain its mass by means of a fit with a Navarro-Frenk-White (NFW) mass distribution. Chapter 4 reports the application of the same analysis to MACS J1206 observed in different filters and to two simulated clusters: Ares and Hera. Chapter 5 sums up the key points of the whole work.
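The step from measured galaxy ellipticities (e.g. from a KSB-style pipeline) to a mass-fit input is a binned tangential-shear profile; the sketch below uses a toy catalogue and is only an illustration of that step, not the thesis code.

```python
import numpy as np

def tangential_shear_profile(x, y, e1, e2, center, r_bins):
    """Azimuthally averaged tangential ellipticity profile,
    g_t = -(e1 cos 2phi + e2 sin 2phi), binned in cluster-centric radius."""
    dx, dy = x - center[0], y - center[1]
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)
    gt = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))
    idx = np.digitize(r, r_bins)
    return np.array([gt[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(r_bins))])

# Toy catalogue with a 1/r shear pattern plus shape noise:
rng = np.random.default_rng(3)
x, y = rng.uniform(-1.0, 1.0, (2, 20000))
r = np.hypot(x, y)
phi = np.arctan2(y, x)
g = 0.05 / np.maximum(r, 0.05)
e1 = -g * np.cos(2 * phi) + 0.2 * rng.standard_normal(20000)
e2 = -g * np.sin(2 * phi) + 0.2 * rng.standard_normal(20000)
print(tangential_shear_profile(x, y, e1, e2, (0.0, 0.0), np.linspace(0.1, 1.0, 6)))
```

An NFW model (or any other mass profile) would then be fitted to this declining profile to estimate the cluster mass.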
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Bartolini, Vieri. "Multiwavelength analysis of the radio source 3C111 through VLBA observations". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25356/.

Texto completo da fonte
Resumo:
This thesis presents the results of the analysis of multi-wavelength observations of the active galaxy 3C111, obtained with the Very Long Baseline Array (VLBA). The observations were made at 5, 8.5, 15, 21, 43, and 86 GHz. Radio emission from active galactic nuclei (AGNs) is one of the most interesting and information-rich phenomena among those accessible using the VLBI technique. AGNs are excellent laboratories for a variety of extreme physical processes. The standard model of AGNs suggests the presence of a supermassive black hole (SMBH) at the center of the galaxy, which accretes matter and is surrounded by an obscuring torus. The complex dynamics around the SMBH and the presence of intense magnetic fields may lead to the production of two oppositely directed streams of plasma, called jets, whose charged particles emit synchrotron radiation. Given its proximity, 3C111 is an excellent object for studying the physics of jets with mm-VLBI. The data gathered with the VLBA are calibrated, yielding source images with the highest resolution achievable with this array at 86 GHz. In addition to the total-intensity images, polarization maps are also calculated for all bands. By comparing the total-intensity images, spectral index maps can be calculated for all adjacent frequency pairs. Moreover, for some triplets of frequencies it is also possible to compute rotation measure (RM) maps. In order to establish relationships between distinct features in the jet, the spatial distribution of the emission in each band is modeled with a set of circular Gaussian components, and for each of them an estimate of the brightness temperature and of the magnetic field is provided. For each physical quantity obtained from the model fitting of the components, a scaling relation is provided, whose accuracy depends on the parameters considered.
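A spectral index map is just the per-pixel logarithmic flux ratio of two aligned images; a minimal sketch, with the flux values and frequencies below being arbitrary examples.

```python
import numpy as np

def spectral_index_map(img_lo, img_hi, nu_lo, nu_hi, floor=None):
    """Per-pixel spectral index alpha with S ~ nu^alpha, from two images
    on the same grid; pixels below `floor` are blanked."""
    alpha = np.log(img_hi / img_lo) / np.log(nu_hi / nu_lo)
    if floor is not None:
        alpha[(img_lo < floor) | (img_hi < floor)] = np.nan
    return alpha

a = spectral_index_map(np.full((64, 64), 2.0), np.full((64, 64), 1.0), 5e9, 15e9)
print(a[0, 0])   # log(0.5)/log(3) ~ -0.63, typical of optically thin jet emission
```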
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Giuri, Chiara. "The time resolved Ep-L correlation in GRBs: characterization and implications". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14082/.

Texto completo da fonte
Resumo:
Our understanding of the GRB phenomenon is still affected by several open issues, including the prompt emission physics, the problem related to the trigger of the GRB activity, the classification of different classes of GRBs, and the degree of collimation of their emission. All of these issues have strong implications for a full comprehension of the nature of GRB progenitors. In addition, their huge luminosity and their redshift distribution extending up to the early Universe make GRBs very promising as cosmological tools. The most investigated way of using them to measure cosmological parameters and the properties and evolution of dark energy is the correlation between the radiated energy and the photon energy at which the spectrum peaks, Ep. This work aims at a further investigation of the time-resolved Ep-L correlation, based on large data sets from the BATSE/CGRO and Fermi/GBM experiments, and at its application to cosmology. This is an innovative approach leading to a significant improvement in the estimates of cosmological parameters with respect to previous time-integrated analyses. Indeed, in time-resolved analysis we can use the correlation between flux and Ep existing in individual GRBs, allowing a cosmology-independent and unbiased calibration of the slope and of the extra-statistical scatter of the correlation. The results of my thesis also provide important inputs for discriminating among different models of the prompt emission of GRBs, and the comparison between the dispersion of the time-resolved correlation and that obtained through the averaged spectra provides an estimate of the dispersion of jet opening angles, with strong implications regarding the nature of GRB progenitors. Finally, my research activity also includes the fitting of time-resolved spectra of some GRBs with both empirical models and a recently developed physical Comptonization model, and simulations of the contribution expected from spectral measurements of GRBs with the Chinese satellite HXMT.
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Colla, Saverio Francesco. "Identification and analysis of super-bubbles candidates in Milky Way-like galaxy simulations". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20876/.

Texto completo da fonte
Resumo:
This thesis presents a study of the identification and analysis of the main properties of underdense gas structures that may be produced by multiple supernova explosions, the so-called super-bubbles. These structures are generated in a numerical simulation of a Milky Way-like galaxy run with the SMUGGLE model, which implements the main physical mechanisms underlying galaxy formation processes. An identification method based on statistical considerations is proposed, which makes it possible to estimate the properties (such as radius, expansion rate, age, and swept-up mass) of the structures that are candidate super-bubbles; this approach is applied both to the entire sample of identified structures and to a sub-sample of five candidates. This thesis also investigates the ability of the SMUGGLE model to establish a relation between the number of supernovae exploded inside a super-bubble and its size, both statistically and by considering some particular structures individually. Finally, the results are compared both with observational data for the Milky Way and other spiral galaxies and with the theory developed for the evolution of super-bubbles.
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Trias, Cornellana Miquel. "Gravitational wave observation of compact binaries: Detection, parameter estimation and template accuracy". Doctoral thesis, Universitat de les Illes Balears, 2011. http://hdl.handle.net/10803/37402.

Texto completo da fonte
Resumo:
This PhD thesis studies, from the data-analysis perspective, the possibility of direct detection of gravitational waves emitted by comparable-mass compact binary objects: black holes, neutron stars, white dwarfs. In the introductory chapters, a) a detailed and exhaustive description is given of how to derive the detected strain from theoretical predictions of the emitted waveform; b) the most commonly used gravitational-wave data-analysis tools are derived, with the discussion of effective and characteristic amplitudes being particularly worth noting. Moreover, three different research lines have been followed in the thesis: 1) The accuracy of parameter estimation (position, masses, spin, cosmological parameters...) for supermassive black hole binary inspiral signals observed with the future space-based interferometric detector LISA has been predicted. 2) A new algorithm, based on Bayesian probability and MCMC techniques, has been developed to search for gravitational-wave signals from stellar-mass binary systems. The algorithm is able to distinguish thousands of overlapping signals in a single observed time series, allowing for individual parameter extraction. 3) It has been defined, mathematically and rigorously, how to compute the validity range (for parameter-estimation and detection purposes) of approximated gravitational waveform models, with an application to the particular case of closed-form models.
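Template-accuracy criteria of the kind mentioned in point 3 are built on the noise-weighted overlap between an exact and an approximated waveform; below is a toy sketch with invented waveforms and a white noise curve, purely to illustrate the quantity involved.

```python
import numpy as np

def overlap(h1, h2, psd, df):
    """Normalised noise-weighted overlap between two frequency-domain
    waveforms; the mismatch 1 - overlap quantifies model inaccuracy."""
    ip = lambda a, b: 4.0 * np.real(np.sum(a * np.conj(b) / psd)) * df
    return ip(h1, h2) / np.sqrt(ip(h1, h1) * ip(h2, h2))

# Toy: an 'exact' chirp-like amplitude/phase versus an approximant with a
# small phase error; all names and numbers are illustrative only.
f = np.linspace(20.0, 400.0, 4000)
df = f[1] - f[0]
psd = np.ones_like(f)                          # white noise for simplicity
h_exact = f ** (-7.0 / 6.0) * np.exp(1j * 0.1 * f)
h_approx = f ** (-7.0 / 6.0) * np.exp(1j * (0.1 * f + 1e-4 * f**1.2))
print(f"mismatch = {1.0 - overlap(h_exact, h_approx, psd, df):.2e}")
```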
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Di, Piano Ambra. "Detection of short Gamma-Ray Bursts with CTA through real-time analysis". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/19962/.

Texto completo da fonte
Resumo:
With respect to current IACTs, CTA will cover a larger energy range (~20 GeV - 300 TeV) with one order of magnitude better sensitivity. The facility will be provided with a real-time analysis (RTA) software system that will automatically generate science alerts and analyse data from on-going observations in real time. The RTA will play a key role in the search and follow-up of transients from external alerts (i.e. from gamma-ray space missions, observatories operating in other energy bands, or targets of opportunity provided by neutrino and gravitational-wave detectors). The scope of this study was to investigate the feasibility of the ctools software package for the RTA, adopting a full-field-of-view maximum likelihood analysis method. A prototype for the RTA was developed, with natively implemented utilities where required. Its performance was extensively tested for very short exposure times (far below the lower limit of current Cherenkov science), accounting for the sensitivity degradation due to the non-optimal working conditions expected for the RTA. The latest IRFs, provided by CTA Performance, were degraded via effective-area reduction for this purpose. The reliability of the analysis methods was tested by means of the verification of Wilks' theorem. Through statistical studies of the pipeline parameter space (i.e. minimum required exposure time), the performance was evaluated in terms of localization precision, detection significance and detection rates at short timescales, using the latest available GRB afterglow templates for the source simulation. Future improvements involve further tests (i.e. with an updated population synthesis) as well as post-trials correction of the detection significance. Moreover, implementations allowing the pipeline to dynamically adapt to a range of science cases are required. Prospects of forthcoming collaboration may involve the integration of this pipeline within the on-going work of the gamma-ray burst experts of the CTA Consortium.
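Verifying Wilks' theorem amounts to checking that the detection test statistic on background-only data follows the expected chi-square distribution; the sketch below does this for a toy Gaussian likelihood with a source norm bounded at zero (not the actual ctools likelihood).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# TS = 2*(lnL_max - lnL_null).  With one bounded parameter, Wilks' theorem
# predicts TS ~ 0.5*chi2(1 dof) (the other half of the probability is TS=0).
n_trials, n_counts = 20000, 100
ts = []
for _ in range(n_trials):
    x = rng.normal(0.0, 1.0, n_counts)       # background-only data
    mu_hat = max(x.mean(), 0.0)              # source norm bounded at zero
    ts.append(n_counts * mu_hat**2)          # 2*Delta(lnL) for Gaussian data
ts = np.array(ts)
frac = np.mean(ts > stats.chi2.ppf(0.997, df=1))
print(f"P(TS > chi2_99.7%) = {frac:.4f} (expect ~0.0015 for half-chi2)")
```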
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Costanzo, Deborah. "Probing the accretion/ejection flows in AGN by characterizing Fe K emission/absorption line variability with residual maps". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16361/.

Texto completo da fonte
Resumo:
The dynamics and geometry of the material in the vicinity of the SMBH in AGN are still uncertain, both for the inflows and for the outflows, which may be responsible for AGN feedback on the host galaxy. Time-resolved spectral analysis is fundamental to study these phenomena, and the 4-10 keV energy band is the best suited, since it contains the iron fluorescence line at 6.4 keV and, possibly, blueshifted resonant iron absorption lines, indicative of ultra-fast disk winds. The analysis adopted in this thesis work is based on the Excess Map technique introduced by Iwasawa et al. (2004). Long exposures are sliced into time intervals and each resulting spectrum is fitted with a simple continuum model. The positive residuals are then used to build an image in the time vs. energy plane, in order to maximize the detection of possible spectral features and derive their temporal evolution. Whereas in the Excess Maps only the positive residuals were considered, with our technique, called the Residual Map, the negative residuals, linked to possible absorption features, are also identified and analysed. In this work we selected a sample of Seyfert 1 galaxies observed with the XMM-Newton satellite, which guarantees the highest sensitivity in the spectral range of interest. From the 3XMM-DR7 catalogue we chose the brightest sources in the bands of interest, NGC3783 and Mrk 509, as well as IRAS13224-3809, i.e. the source with the longest observation currently available, although at a lower flux level. The work focuses on a particularly interesting observation among those of NGC3783; in this observation the Fe K_alpha line shows four peaks separated by about 5-10 ks, corresponding to the orbital times of a hot spot co-rotating with the accretion disk at distances of about 9-30 Rg.
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Billi, Matteo. "Joint temperature and polarisation analyses of the lack of power anomaly in the CMB anisotropy pattern". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16205/.

Texto completo da fonte
Resumo:
The Cosmic Microwave Background (CMB) is the radiation emitted in the early Universe after the recombination of electrons and protons into neutral hydrogen. CMB observations over the last 30 years have strongly contributed to the birth of precision cosmology and to the establishment of a standard cosmological model, called ΛCDM, whose parameters are estimated with an uncertainty of the order of a percent or even below. Nevertheless, there are poorly understood features observed at the large angular scales of the CMB temperature map, known as anomalies. One of these, the lack of power with respect to the ΛCDM prediction, could indicate the existence of a new cosmological phase preceding the inflationary epoch. This anomaly, consistently observed by both WMAP and Planck, does not, however, have the statistical significance required to claim the existence of such a new phase. In order to study this lack of power we used several statistical estimators, which include both temperature and polarisation maps in the analysis. Specific and innovative estimators, based on the CMB angular power spectrum, were built, tested and applied to simulations and to the data obtained by the Planck satellite in 2015. The comparison between simulations and data was evaluated by providing the percentage of consistency. Forecasts were also provided for the sensitivity of the proposed estimators when applied to future CMB observations. The improvement found can reach a factor of order 30, showing that future CMB polarisation measurements may help turn an anomaly, currently observed only in temperature, into a detection of a new physical phenomenon.
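A classic statistic used in this literature to quantify the large-angle lack of power (not necessarily one of the thesis's specific estimators) is S_1/2, the integral of the squared angular correlation function over angles larger than 60 degrees; a sketch under a toy Sachs-Wolfe-like spectrum follows.

```python
import numpy as np
from numpy.polynomial import legendre

def s_half(cl, lmax=100):
    """S_1/2 = integral over cos(theta) in [-1, 1/2] of C(theta)^2, with
    C(theta) = sum_l (2l+1)/(4 pi) C_l P_l(cos theta)."""
    ells = np.arange(2, lmax + 1)
    coeffs = np.zeros(lmax + 1)
    coeffs[2:] = (2 * ells + 1) / (4 * np.pi) * cl[2:lmax + 1]
    x = np.linspace(-1.0, 0.5, 3001)
    ctheta = legendre.legval(x, coeffs)
    return np.sum(ctheta**2) * (x[1] - x[0])

# Toy Sachs-Wolfe-like spectrum C_l ~ 1/(l(l+1)); a "lack of power" would
# show up as an anomalously small S_1/2 in data relative to simulations.
cl = np.zeros(101)
ls = np.arange(2, 101)
cl[2:] = 1.0 / (ls * (ls + 1.0))
print(f"S_1/2 = {s_half(cl):.3e}")
```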
49

Souza, José Cleriston Campos de. "Análise geométrica e dinâmica de modelos de gravidade generalizada". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-09052008-100444/.

Full text source
Abstract:
This work investigates some dynamical aspects of generalized gravity models, namely scalar-tensor and f(R) models. These models aim to solve in a more natural way the problem of the existence of dark energy, supposedly the component of the Universe that causes its accelerated expansion. In a Friedmann-Lemaître-Robertson-Walker spacetime with null spatial curvature, it has been possible to write the equations of motion so as to obtain a dynamical system with a reduced number of variables, whose phase space has been studied generically and depicted for some particular models. The dynamically forbidden regions and the fixed points of the phase space have then been analyzed. For f(R) models, we have presented effective Lagrangians and Hamiltonians and derived a general expression for the equation-of-state parameter w. Furthermore, we have discussed the equivalence between f(R) and scalar-tensor models. Finally, we have introduced the Maupertuis-Jacobi Principle, which allows one to relate the Lagrangian of a mechanical system to a metric on a certain Riemannian manifold, to determine singularities which may appear in f(R) models, both in an isotropic metric and in an anisotropic one of the simplest kind (Bianchi type I). We have found, in a more direct way, the same singularities already known from dynamical analysis methods.
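For reference, in a spatially flat FLRW background the f(R) field equations can be written in the standard compact form below (with F ≡ df/dR and κ² = 8πG), from which the effective equation-of-state parameter follows directly from the expansion history; this is the textbook form, not necessarily the exact parametrisation adopted in the thesis:

    \begin{aligned}
    3FH^{2} &= \tfrac{1}{2}(FR - f) - 3H\dot{F} + \kappa^{2}\rho_{m},\\
    -2F\dot{H} &= \ddot{F} - H\dot{F} + \kappa^{2}(\rho_{m} + p_{m}),\\
    w_{\mathrm{eff}} &= -1 - \frac{2\dot{H}}{3H^{2}}.
    \end{aligned}

Normalising the terms of the first equation by 3FH² is the usual route to the reduced set of dimensionless variables whose phase space is studied in analyses of this kind.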
50

Traina, Alberto. "On the X-ray properties of heavily obscured AGN in the backyard". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21440/.

Full text source
Abstract:
One of the main goals of extragalactic astrophysics is to achieve a complete knowledge of the processes which produce the observed emission from AGN. AGN classification in the X-rays divides these sources into two types, unobscured and obscured, and their combined emission explains most of the cosmic X-ray background (CXB). The fraction of resolved CXB at its peak, however, is only about 30%. From models of the CXB it is possible to obtain indications on the nature of the sources which are ‘missing’ in the current census of CXB ‘constituents’: Compton-thick AGN (CT-AGN) must be responsible for ∼35% of the peak CXB emission. Observations show that the fraction of CT-AGN is 10-15%; in order to fill this gap, a census of obscured AGN is needed. X-ray observations provide a characterization of the inner regions of AGN. In this perspective, we present the X-ray spectral analysis of two low-redshift Compton-thick AGN candidates, NGC 3081 and ESO 565-G019, belonging to a larger sample of 57 candidate CT-AGN. We selected these two AGN because of their high photon statistics, provided by proprietary XMM-Newton and simultaneous NuSTAR data, and the availability of Chandra data. We analyzed the two spectra using both simple phenomenological models and physically motivated models (MYTorus and borus02), in order to better characterize the intrinsic physical and geometric properties of the obscuring torus, as well as its nature. The main goal of this thesis is to measure the column density of the two sources, together with their other spectral properties, to establish whether they belong to the Compton-thick regime or are Compton-thin AGN. We found that both AGN have an obscuring structure which is characterized, on average, by Compton-thick material. Finally, it is important to underline that this type of analysis is also preparatory for future planned missions such as Athena (∼2030) and the AXIS probe (mission under study), and can lead us to a better understanding of the properties of the torus and of the CXB components.
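As a back-of-the-envelope illustration of the Compton-thick criterion discussed above (a sketch under the common convention that the Compton-thick regime begins where the Compton optical depth reaches unity; the transmission-only continuum and the externally supplied cross-section are simplifying assumptions standing in for the full MYTorus/borus02 table models):

    import numpy as np

    SIGMA_T = 6.65e-25  # Thomson cross-section [cm^2]

    def is_compton_thick(n_h):
        # Compton-thick if N_H * sigma_T >= 1, i.e. N_H >~ 1.5e24 cm^-2
        return n_h * SIGMA_T >= 1.0

    def transmitted_powerlaw(e_kev, gamma, n_h, sigma_of_e):
        # Zeroth-order transmitted continuum: an intrinsic power law
        # attenuated by an energy-dependent absorbing cross-section,
        # supplied here as a callable sigma_of_e(E) in cm^2
        return e_kev ** (-gamma) * np.exp(-n_h * sigma_of_e(e_kev))

The measured column density N_H then places each source on one side or the other of this threshold, which is precisely the question the thesis addresses for NGC 3081 and ESO 565-G019.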
