Academic literature on the topic 'Photometric gaussian mixtures'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Photometric gaussian mixtures.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Photometric gaussian mixtures":

1

Crombez, Nathan, El Mustapha Mouaddib, Guillaume Caron, and Francois Chaumette. "Visual Servoing With Photometric Gaussian Mixtures as Dense Features." IEEE Transactions on Robotics 35, no. 1 (February 2019): 49–63. http://dx.doi.org/10.1109/tro.2018.2876765.

2

Hatfield, P. W., I. A. Almosallam, M. J. Jarvis, N. Adams, R. A. A. Bowler, Z. Gomes, S. J. Roberts, and C. Schreiber. "Augmenting machine learning photometric redshifts with Gaussian mixture models." Monthly Notices of the Royal Astronomical Society 498, no. 4 (September 11, 2020): 5498–510. http://dx.doi.org/10.1093/mnras/staa2741.

Abstract:
Wide-area imaging surveys are one of the key ways of advancing our understanding of cosmology, galaxy formation physics, and the large-scale structure of the Universe in the coming years. These surveys typically require calculating redshifts for huge numbers (hundreds of millions to billions) of galaxies – almost all of which must be derived from photometry rather than spectroscopy. In this paper, we investigate how using statistical models to understand the populations that make up the colour–magnitude distribution of galaxies can be combined with machine learning photometric redshift codes to improve redshift estimates. In particular, we combine the use of Gaussian mixture models with the high-performing machine-learning photo-z algorithm GPz and show that modelling and accounting for the different colour–magnitude distributions of training and test data separately can give improved redshift estimates, reduce the bias on estimates by up to a half, and speed up the run-time of the algorithm. These methods are illustrated using data from deep optical and near-infrared data in two separate deep fields, where training and test data of different colour–magnitude distributions are constructed from the galaxies with known spectroscopic redshifts, derived from several heterogeneous surveys.
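As a hedged sketch of the idea of modelling training and test colour–magnitude distributions separately, one can fit a GMM to each population and importance-weight the training set by the density ratio (the data, component count, and feature choice below are invented for illustration, not the paper's):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical colour-magnitude data: columns are [g-r colour, r magnitude].
train = rng.normal([0.5, 21.0], [0.3, 1.0], size=(2000, 2))  # brighter training set
test = rng.normal([0.7, 23.0], [0.4, 1.2], size=(2000, 2))   # fainter test set

# Fit one GMM per population, then weight each training galaxy by the
# density ratio so the training set mimics the test distribution.
gmm_train = GaussianMixture(n_components=3, random_state=0).fit(train)
gmm_test = GaussianMixture(n_components=3, random_state=0).fit(test)

log_w = gmm_test.score_samples(train) - gmm_train.score_samples(train)
weights = np.exp(log_w)
weights /= weights.sum()

# The weighted mean magnitude of the training set shifts toward the
# fainter test population, as intended.
weighted_mag = np.sum(weights * train[:, 1])
```

The weights could then be passed to any learner that accepts per-sample weights, which is one simple way to mitigate training/test mismatch.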
3

Jones, Daniel M., and Alan F. Heavens. "Gaussian mixture models for blended photometric redshifts." Monthly Notices of the Royal Astronomical Society 490, no. 3 (September 27, 2019): 3966–86. http://dx.doi.org/10.1093/mnras/stz2687.

Abstract:
Future cosmological galaxy surveys such as the Large Synoptic Survey Telescope (LSST) will photometrically observe very large numbers of galaxies. Without spectroscopy, the redshifts required for the analysis of these data will need to be inferred using photometric redshift techniques that are scalable to large sample sizes. The high number density of sources will also mean that around half are blended. We present a Bayesian photometric redshift method for blended sources that uses Gaussian mixture models to learn the joint flux–redshift distribution from a set of unblended training galaxies, and Bayesian model comparison to infer the number of galaxies comprising a blended source. The use of Gaussian mixture models renders both of these applications computationally efficient and therefore suitable for upcoming galaxy surveys.
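The core construction here, learning the joint flux–redshift distribution with a Gaussian mixture and conditioning on the observed photometry, can be sketched on toy data (the magnitude–redshift relation and component count are invented, not the paper's model):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy training set: one magnitude-like feature tightly correlated with redshift.
z_train = rng.uniform(0.0, 2.0, 5000)
mag = 22.0 + 1.5 * z_train + rng.normal(0.0, 0.2, 5000)
joint = np.column_stack([mag, z_train])

# Learn the joint (magnitude, redshift) density.
gmm = GaussianMixture(n_components=8, random_state=0).fit(joint)

def redshift_pdf(m, z_grid):
    """Posterior p(z | m) from the joint GMM, normalised on the grid."""
    pts = np.column_stack([np.full_like(z_grid, m), z_grid])
    dens = np.exp(gmm.score_samples(pts))
    dz = z_grid[1] - z_grid[0]
    return dens / (dens.sum() * dz)

z_grid = np.linspace(0.0, 2.0, 400)
pdf = redshift_pdf(24.25, z_grid)  # magnitude generated from z near 1.5
z_map = z_grid[np.argmax(pdf)]
```

For a GMM the conditional is itself an analytic Gaussian mixture; the grid evaluation above is just the simplest way to show the idea.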
4

Ansari, Zoe, Adriano Agnello, and Christa Gall. "Mixture models for photometric redshifts." Astronomy & Astrophysics 650 (June 2021): A90. http://dx.doi.org/10.1051/0004-6361/202039675.

Abstract:
Context. Determining photometric redshifts (photo-zs) of extragalactic sources to a high accuracy is paramount to measure distances in wide-field cosmological experiments. With only photometric information at hand, photo-zs are prone to systematic uncertainties in the intervening extinction and the unknown underlying spectral-energy distribution of different astrophysical sources, leading to degeneracies in the modern machine learning algorithm that impacts the level of accuracy for photo-z estimates. Aims. Here, we aim to resolve these model degeneracies and obtain a clear separation between intrinsic physical properties of astrophysical sources and extrinsic systematics. Furthermore, we aim to have meaningful estimates of the full photo-z probability distribution, and their uncertainties. Methods. We performed a probabilistic photo-z determination using mixture density networks (MDN). The training data set is composed of optical (griz photometric bands) point-spread-function and model magnitudes and extinction measurements from the SDSS-DR15 and WISE mid-infrared (3.4 μm and 4.6 μm) model magnitudes. We used infinite Gaussian mixture models to classify the objects in our data set as stars, galaxies, or quasars, and to determine the number of MDN components to achieve optimal performance. Results. The fraction of objects that are correctly split into the main classes of stars, galaxies, and quasars is 94%. Furthermore, our method improves the bias of photometric redshift estimation (i.e., the mean Δz = (zp − zs)/(1 + zs)) by one order of magnitude compared to the SDSS photo-z, and it decreases the fraction of 3σ outliers (i.e., 3 × rms(Δz) < Δz). The relative, root-mean-square systematic uncertainty in our resulting photo-zs is down to 1.7% for benchmark samples of low-redshift galaxies (zs < 0.5). Conclusions. We have demonstrated the feasibility of machine-learning-based methods that produce full probability distributions for photo-z estimates with a performance that is competitive with state-of-the-art techniques. Our method can be applied to wide-field surveys where extinction can vary significantly across the sky and with sparse spectroscopic calibration samples. The code is publicly available.
5

Wagenveld, J. D., A. Saxena, K. J. Duncan, H. J. A. Röttgering, and M. Zhang. "Revealing new high-redshift quasar populations through Gaussian mixture model selection." Astronomy & Astrophysics 660 (April 2022): A22. http://dx.doi.org/10.1051/0004-6361/202142445.

Abstract:
We present a novel method for identifying candidate high-redshift quasars (HzQs; z ≳ 5.5), which are unique probes of supermassive black hole growth in the early Universe, from large-area optical and infrared photometric surveys. Using Gaussian mixture models to construct likelihoods and incorporating informed priors based on population statistics, our method uses a Bayesian framework to assign posterior probabilities that differentiate between HzQs and contaminating sources. We additionally include deep radio data to obtain informed priors. Using existing HzQ data in the literature, we set a posterior threshold that accepts ∼90% of known HzQs while rejecting >99% of contaminants such as dwarf stars or lower redshift galaxies. Running the probability selection on test samples of simulated HzQs and contaminants, we find that the efficacy of the probability method is higher than traditional colour cuts, decreasing the fraction of accepted contaminants by 86% while retaining a similar fraction of HzQs. As a test, we apply our method to the Pan-STARRS Data Release 1 (PS1) source catalogue within the HETDEX Spring field area on the sky, covering 400 sq. deg. and coinciding with deep radio data from the LOFAR Two-metre Sky Survey Data Release 1. From an initial sample of ∼5 × 10⁵ sources in PS1, our selection shortlists 251 candidate HzQs, which are further reduced to 63 after visual inspection. Shallow spectroscopic follow-up of 13 high-probability HzQs resulted in the confirmation of a previously undiscovered quasar at z = 5.66 with photometric colours i − z = 1.4, lying outside the typically probed regions when selecting HzQs based on colours. This discovery demonstrates the efficacy of our probabilistic HzQ selection method in selecting more complete HzQ samples, which holds promise when employed on large existing and upcoming photometric data sets.
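The likelihood-plus-prior scheme can be sketched with scikit-learn on invented two-colour data (the class distributions, the prior value, and the test points are placeholders, not the paper's values):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Invented two-colour space: high-z quasars sit well below the stellar locus.
quasars = rng.normal([1.5, 0.0], [0.3, 0.2], size=(1000, 2))
stars = rng.normal([1.5, 1.2], [0.3, 0.2], size=(4000, 2))

# Per-class likelihoods from Gaussian mixture models.
gmm_q = GaussianMixture(n_components=2, random_state=0).fit(quasars)
gmm_s = GaussianMixture(n_components=2, random_state=0).fit(stars)

prior_q = 1e-3  # informed prior: quasars are vastly outnumbered by contaminants

def p_quasar(x):
    """Posterior probability that each source in x is a quasar."""
    lq = np.exp(gmm_q.score_samples(x))
    ls = np.exp(gmm_s.score_samples(x))
    return prior_q * lq / (prior_q * lq + (1.0 - prior_q) * ls)

# One source at the quasar locus, one at the stellar locus.
probs = p_quasar(np.array([[1.5, 0.0], [1.5, 1.2]]))
```

A threshold on the posterior (rather than hard colour cuts) is what lets this style of selection trade completeness against contamination explicitly.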
6

D’Isanto, A., and K. L. Polsterer. "Photometric redshift estimation via deep learning." Astronomy & Astrophysics 609 (January 2018): A111. http://dx.doi.org/10.1051/0004-6361/201731326.

Abstract:
Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods improved the results, substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims. We aim to develop a method to derive probabilistic photometric redshift directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods. A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We have adopted a feature based random forest and a plain mixture density network to compare performances on experiments with data from SDSS (DR9). Results. We show that the proposed method is able to predict redshift PDFs independently from the type of source, for example galaxies, quasars or stars. Thereby the prediction performance is better than both presented reference methods and is comparable to results from the literature. Conclusions. The presented method is extremely general and allows us to solve of any kind of probabilistic regression problems based on imaging data, for example estimating metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
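One of the scoring tools named here, the continuous ranked probability score (CRPS), has a closed form for a single Gaussian predictive distribution; the sketch below illustrates that case only (extending to a full Gaussian mixture needs the pairwise-component formula, omitted here):

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian predictive PDF N(mu, sigma)
    evaluated at the observed value y (lower is better)."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))

# CRPS rewards sharpness as well as calibration: a centred but over-wide
# forecast scores worse than a centred sharp one, and a confident miss
# scores worse still.
sharp = crps_gaussian(0.5, 0.1, 0.5)
wide = crps_gaussian(0.5, 1.0, 0.5)
miss = crps_gaussian(0.0, 1.0, 3.0)
```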
7

Duncan, Kenneth J. "All-purpose, all-sky photometric redshifts for the Legacy Imaging Surveys Data Release 8." Monthly Notices of the Royal Astronomical Society 512, no. 3 (March 8, 2022): 3662–83. http://dx.doi.org/10.1093/mnras/stac608.

Abstract:
In this paper, we present photometric redshift (photo-z) estimates for the Dark Energy Spectroscopic Instrument (DESI) Legacy Imaging Surveys, currently the most sensitive optical survey covering the majority of the extragalactic sky. Our photo-z methodology is based on a machine-learning approach, using sparse Gaussian processes augmented with Gaussian mixture models (GMMs) that allow regions of parameter space to be identified and trained separately in a purely data-driven way. The same GMMs are also used to calculate cost-sensitive learning weights that mitigate biases in the spectroscopic training sample. By design, this approach aims to produce reliable and unbiased predictions for all parts of the parameter space present in wide area surveys. Compared to previous literature estimates using the same underlying photometry, our photo-zs are significantly less biased and more accurate at z > 1, with negligible loss in precision or reliability for resolved galaxies at z < 1. Our photo-z estimates offer accurate predictions for rare high-value populations within the parent sample, including optically selected quasars at the highest redshifts (z > 6), as well as X-ray or radio continuum selected populations across a broad range of flux (densities) and redshift. Deriving photo-z estimates for the full Legacy Imaging Surveys Data Release 8, the catalogues provided in this work offer photo-z estimates predicted to be of high quality for ≳9 × 10⁸ galaxies over ∼19,400 deg² and spanning 0 < z ≲ 7, offering one of the most extensive samples of redshift estimates ever produced.
8

Jang, J. K., Sukyoung K. Yi, Yohan Dubois, Jinsu Rhee, Christophe Pichon, Taysun Kimm, Julien Devriendt, et al. "Translators of Galaxy Morphology Indicators between Observation and Simulation." Astrophysical Journal 950, no. 1 (June 1, 2023): 4. http://dx.doi.org/10.3847/1538-4357/accd68.

Abstract:
Based on the recent advancements in numerical simulations of galaxy formation, we anticipate the achievement of realistic models of galaxies in the near future. Morphology is the most basic and fundamental property of galaxies, yet observations and simulations still use different methods to determine galaxy morphology, making it difficult to compare them. We hereby perform a test on the recent NewHorizon simulation, which has spatial and mass resolutions that are remarkably high for a large-volume simulation, to resolve the situation. We generate mock images for the simulated galaxies using SKIRT, which calculates complex radiative transfer processes in each galaxy. We measure morphological and kinematic indicators using photometric and spectroscopic methods following observers' techniques. We also measure the kinematic disk-to-total ratios using the Gaussian mixture model and assume that they represent the true structural composition of galaxies. We found that spectroscopic indicators such as V/σ and λ_R closely trace the kinematic disk-to-total ratios. In contrast, photometric disk-to-total ratios based on the radial profile fitting method often fail to recover the true kinematic structure of galaxies, especially small ones. We provide translating equations between various morphological indicators.
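The kinematic decomposition step, reading the disc-to-total ratio off a Gaussian mixture fit to stellar orbital circularities, can be sketched on invented data (the component count, means, widths, and the underlying 0.7 disc fraction are illustrative, not the paper's):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical stellar circularities: bulge centred on 0, disc centred near 1.
bulge = rng.normal(0.0, 0.3, 3000)
disc = rng.normal(0.95, 0.15, 7000)
eps = np.concatenate([bulge, disc]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(eps)

# The disc component is the one with the higher mean circularity;
# its mixture weight estimates the kinematic disc-to-total ratio.
disc_idx = int(np.argmax(gmm.means_[:, 0]))
disk_to_total = gmm.weights_[disc_idx]
```

The appeal of the mixture weight over a hard circularity cut is that it handles the overlap between bulge and disc stars probabilistically.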
9

Escudero, Carlos G., Arianna Cortesi, Favio R. Faifer, Leandro A. Sesto, Analía V. Smith Castelli, Evelyn J. Johnston, Victoria Reynaldi, et al. "The complex globular cluster system of the S0 galaxy NGC 4382 in the outskirts of the Virgo Cluster." Monthly Notices of the Royal Astronomical Society 511, no. 1 (January 8, 2022): 393–412. http://dx.doi.org/10.1093/mnras/stac021.

Abstract:
NGC 4382 is a merger-remnant galaxy that has been classified as morphological type E2, S0, and even Sa. In this work, we performed a photometric and spectroscopic analysis of the globular cluster (GC) system of this peculiar galaxy in order to provide additional information about its history. We used a combination of photometric data in different filters, and multiobject and long-slit spectroscopic data obtained using the Gemini/GMOS instrument. The photometric analysis of the GC system, using the Gaussian Mixture Model algorithm in the colour plane, reveals a complex colour distribution within Rgal < 5 arcmin (26.1 kpc), showing four different groups: the typical blue and red subpopulations, a group with intermediate colours, and the fourth group towards even redder colours. From the spectroscopic analysis of 47 GCs, confirmed members of NGC 4382 based on radial velocities, we verified 3 of the 4 photometric groups from the analysis of their stellar populations using the ULySS code. NGC 4382 presents the classic blue (10.4 ± 2.8 Gyr, [Fe/H] = −1.48 ± 0.18 dex), and red (12.1 ± 2.3 Gyr, [Fe/H] = −0.64 ± 0.26 dex) GCs formed earlier in the lifetime of the galaxy, and a third group of young GCs (2.2 ± 0.9 Gyr; [Fe/H] = −0.05 ± 0.28 dex). Finally, analysis of long-slit data of the galaxy reveals a luminosity-weighted mean age for the stellar population of ∼2.7 Gyr, and an increasing metallicity from [Fe/H] = −0.1 to +0.2 dex in Rgal < 10 arcsec (0.87 kpc). These values, and other morphological signatures in the galaxy, are in good agreement with the younger group of GCs, indicating a common origin as a result of a recent merger.
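Identifying colour subpopulations with a Gaussian mixture is commonly paired with information-criterion model selection to choose how many groups the data support; a sketch on invented two-group colours (the paper found four groups; all numbers below are placeholders):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Hypothetical globular-cluster colours: blue and red subpopulations
# in a two-colour plane.
blue = rng.normal([0.75, 0.35], [0.05, 0.04], size=(150, 2))
red = rng.normal([1.05, 0.55], [0.06, 0.05], size=(120, 2))
colours = np.vstack([blue, red])

# Choose the number of subpopulations by minimising the BIC.
bics = [GaussianMixture(n_components=k, random_state=0).fit(colours).bic(colours)
        for k in range(1, 6)]
best_k = int(np.argmin(bics)) + 1
```

With well-separated clean groups the BIC minimum lands on the true component count; for the blended, reddened distributions of real GC systems the choice is less clear-cut and is usually checked against the spectroscopy, as done in the paper.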
10

Tardugno Poleo, Valentina, Steven L. Finkelstein, Gene Leung, Erin Mentuch Cooper, Karl Gebhardt, Daniel J. Farrow, Eric Gawiser, et al. "Identifying Active Galactic Nuclei at z ∼ 3 from the HETDEX Survey Using Machine Learning." Astronomical Journal 165, no. 4 (March 10, 2023): 153. http://dx.doi.org/10.3847/1538-3881/acba92.

Abstract:
We used data from the Hobby–Eberly Telescope Dark Energy Experiment (HETDEX) to study the incidence of AGN in continuum-selected galaxies at z ∼ 3. From optical and infrared imaging in the 24 deg² Spitzer HETDEX Exploratory Large Area survey, we constructed a sample of photometric-redshift selected z ∼ 3 galaxies. We extracted HETDEX spectra at the position of 716 of these sources and used machine-learning methods to identify those which exhibited AGN-like features. The dimensionality of the spectra was reduced using an autoencoder, and the latent space was visualized through t-distributed stochastic neighbor embedding. Gaussian mixture models were employed to cluster the encoded data and a labeled data set was used to label each cluster as either AGN, stars, high-redshift galaxies, or low-redshift galaxies. Our photometric redshift (photo-z) sample was labeled with an estimated 92% overall accuracy, an AGN accuracy of 83%, and an AGN contamination of 5%. The number of identified AGN was used to measure an AGN fraction for different magnitude bins. The ultraviolet (UV) absolute magnitude where the AGN fraction reaches 50% is M_UV = −23.8. When combined with results in the literature, our measurements of AGN fraction imply that the bright end of the galaxy luminosity function exhibits a power law rather than exponential decline, with a relatively shallow faint-end slope for the z ∼ 3 AGN luminosity function.
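A minimal stand-in for this pipeline, with PCA in place of the autoencoder, no t-SNE step, and invented "spectra", shows the cluster-then-label idea (all names, shapes, and the two-class setup are assumptions for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Hypothetical high-dimensional "spectra" for two source classes that
# differ in which wavelength bins carry extra flux.
class_a = rng.normal(0.0, 1.0, size=(300, 50)); class_a[:, :5] += 4.0
class_b = rng.normal(0.0, 1.0, size=(300, 50)); class_b[:, 5:10] += 4.0
spectra = np.vstack([class_a, class_b])
labels = np.array([0] * 300 + [1] * 300)

# Stand-in for the autoencoder: compress to a low-dimensional latent space.
latent = PCA(n_components=2, random_state=0).fit_transform(spectra)

# Cluster the latent space, then name each cluster by majority vote of the
# labelled subset that falls inside it.
clusters = GaussianMixture(n_components=2, random_state=0).fit_predict(latent)
cluster_label = {c: int(np.bincount(labels[clusters == c]).argmax())
                 for c in (0, 1)}
predicted = np.array([cluster_label[c] for c in clusters])
accuracy = (predicted == labels).mean()
```

The decisive property is that only the small labelled subset needs class names; the clustering itself is unsupervised, which is what makes the approach scale to large spectroscopic samples.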

Dissertations / Theses on the topic "Photometric gaussian mixtures":

1

Guerbas, Seif Eddine. "Modélisation adaptée des images omnidirectionnelles pour agrandir le domaine de convergence de l'asservissement visuel virtuel direct." Electronic Thesis or Diss., Amiens, 2022. http://www.theses.fr/2022AMIE0026.

Abstract:
Omnidirectional vision captures a scene in real time in all directions, with a wider field of view than that of a conventional camera. Within the environment, linking the visual features contained in the camera images to the camera's movements is a central issue for visual servoing. Direct approaches, however, are characterized by a limited domain of convergence. The first objective of this thesis is to significantly extend that domain in the context of virtual visual servoing by representing the omnidirectional image as a Photometric Gaussian Mixture (PGM). This approach is then extended to registration and direct 3D-model-based tracking in omnidirectional images, which makes it possible to study the localization of a mobile robot equipped with a panoramic camera within a 3D urban model. Experiments were conducted in virtual environments and with real images captured by a mobile robot and a vehicle. The results show a significant enlargement of the convergence domain, yielding strong robustness to large inter-frame movements.
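The representation named in the title, replacing each pixel by an intensity-weighted Gaussian so the image becomes a smooth mixture whose spread widens the alignment basin, can be sketched naively as follows (the `pgm` function, the toy image, and the spread parameter `lam` are illustrative, not the thesis implementation):

```python
import numpy as np

def pgm(image, xs, ys, lam=2.0):
    """Evaluate the photometric Gaussian mixture of `image` at points
    (xs, ys): each pixel contributes an isotropic Gaussian of spread
    `lam`, weighted by its intensity."""
    h, w = image.shape
    v, u = np.mgrid[0:h, 0:w]  # pixel row/column coordinates
    out = np.zeros(xs.shape)
    for i in range(out.size):
        d2 = (u - xs.flat[i]) ** 2 + (v - ys.flat[i]) ** 2
        out.flat[i] = np.sum(image * np.exp(-d2 / (2.0 * lam ** 2)))
    return out

# A single bright pixel becomes a smooth blob: large `lam` gives the
# wide, non-flat cost surface that direct alignment benefits from.
img = np.zeros((16, 16))
img[8, 8] = 1.0
xs, ys = np.meshgrid(np.arange(16, dtype=float), np.arange(16, dtype=float))
g = pgm(img, xs, ys, lam=3.0)
```

In a servoing loop, the cost would compare the PGMs of the current and desired images instead of raw intensities, which is what enlarges the convergence domain.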

Conference papers on the topic "Photometric gaussian mixtures":

1

Crombez, Nathan, Guillaume Caron, and El Mustapha Mouaddib. "Photometric Gaussian mixtures based visual servoing." In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015. http://dx.doi.org/10.1109/iros.2015.7354154.

