Journal articles on the topic "Bayesian Moment Matching"

Browse the 19 best journal articles for your research on the topic "Bayesian Moment Matching".

1

Zhang, Qiong, and Yongjia Song. "Moment-Matching-Based Conjugacy Approximation for Bayesian Ranking and Selection". ACM Transactions on Modeling and Computer Simulation 27, no. 4 (December 20, 2017): 1–23. http://dx.doi.org/10.1145/3149013.

2

Franke, Reiner, Tae-Seok Jang, and Stephen Sacht. "Moment matching versus Bayesian estimation: Backward-looking behaviour in a New-Keynesian baseline model". North American Journal of Economics and Finance 31 (January 2015): 126–54. http://dx.doi.org/10.1016/j.najef.2014.11.001.

3

Cao, Zhixing, and Ramon Grima. "Accuracy of parameter estimation for auto-regulatory transcriptional feedback loops from noisy data". Journal of The Royal Society Interface 16, no. 153 (April 3, 2019): 20180967. http://dx.doi.org/10.1098/rsif.2018.0967.

Abstract:
Bayesian and non-Bayesian moment-based inference methods are commonly used to estimate the parameters defining stochastic models of gene regulatory networks from noisy single cell or population snapshot data. However, a systematic investigation of the accuracy of the predictions of these methods remains missing. Here, we present the results of such a study using synthetic noisy data of a negative auto-regulatory transcriptional feedback loop, one of the most common building blocks of complex gene regulatory networks. We study the error in parameter estimation as a function of (i) number of cells in each sample; (ii) the number of time points; (iii) the highest-order moment of protein fluctuations used for inference; (iv) the moment-closure method used for likelihood approximation. We find that for sample sizes typical of flow cytometry experiments, parameter estimation by maximizing the likelihood is as accurate as using Bayesian methods but with a much reduced computational time. We also show that the choice of moment-closure method is the crucial factor determining the maximum achievable accuracy of moment-based inference methods. Common likelihood approximation methods based on the linear noise approximation or the zero cumulants closure perform poorly for feedback loops with large protein–DNA binding rates or large protein bursts; this is exacerbated for highly heterogeneous cell populations. By contrast, approximating the likelihood using the linear-mapping approximation or conditional derivative matching leads to highly accurate parameter estimates for a wide range of conditions.
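To make the moment-based inference idea concrete, here is a minimal sketch that fits a parameter by maximising a Gaussian approximate likelihood of a sample moment. The Poisson-like toy model, the function names and all numbers are illustrative assumptions, not the paper's feedback-loop model or code.

```python
import numpy as np
from scipy.optimize import minimize

# Toy moment-based inference sketch (assumed model, not the paper's).
# A model supplies the mean and variance of molecule counts as functions
# of a parameter theta; here a Poisson-like relation mean = var = theta
# stands in for the moment equations a moment-closure method would give.
def model_moments(theta):
    return theta, theta

def neg_log_approx_lik(params, data):
    theta = params[0]
    if theta <= 0:
        return np.inf
    m, v = model_moments(theta)
    n = len(data)
    xbar = data.mean()
    # Gaussian approximation for the sample mean: Var(xbar) = v / n
    return 0.5 * np.log(2 * np.pi * v / n) + 0.5 * (xbar - m) ** 2 / (v / n)

rng = np.random.default_rng(0)
data = rng.poisson(5.0, size=500).astype(float)
fit = minimize(neg_log_approx_lik, x0=[1.0], args=(data,), method="Nelder-Mead")
print("estimated theta:", fit.x[0])   # close to the true value 5
```

Swapping in different moment-closure approximations for the Gaussian moment likelihood is exactly where, per the abstract, the attainable accuracy is decided.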
4

Nakagawa, Tomoyuki, and Shintaro Hashimoto. "On Default Priors for Robust Bayesian Estimation with Divergences". Entropy 23, no. 1 (December 27, 2020): 29. http://dx.doi.org/10.3390/e23010029.

Abstract:
This paper presents objective priors for robust Bayesian estimation against outliers based on divergences. The minimum γ-divergence estimator is well known to work well in estimation under heavy contamination. Robust Bayesian methods using quasi-posterior distributions based on divergences have also been proposed in recent years. In the objective Bayesian framework, the selection of default prior distributions under such quasi-posterior distributions is an important problem. In this study, we provide some properties of reference and moment matching priors under the quasi-posterior distribution based on the γ-divergence. In particular, we show that the proposed priors are approximately robust under a condition on the contamination distribution, without assuming any conditions on the contamination ratio. Some simulation studies are also presented.
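For orientation, the γ-divergence quasi-posterior referred to here is usually built from the empirical γ-cross-entropy; a hedged sketch of the standard construction (with f_θ the model density, π the prior and γ > 0 the tuning parameter; the paper's exact normalisation may differ) is:

```latex
\[
  \widehat{d}_\gamma(\theta)
    = -\frac{1}{\gamma}
      \log\!\Bigl(\frac{1}{n}\sum_{i=1}^{n} f_\theta(x_i)^{\gamma}\Bigr)
      + \frac{1}{1+\gamma}\log\!\int f_\theta(x)^{1+\gamma}\,dx,
  \qquad
  \pi_\gamma(\theta \mid x_{1:n})
    \propto \pi(\theta)\,\exp\!\bigl\{-n\,\widehat{d}_\gamma(\theta)\bigr\}.
\]
```

The reference and moment matching priors studied in the paper are default choices of π(θ) for this quasi-posterior.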
5

Yiu, A., R. J. B. Goudie, and B. D. M. Tom. "Inference under unequal probability sampling with the Bayesian exponentially tilted empirical likelihood". Biometrika 107, no. 4 (May 21, 2020): 857–73. http://dx.doi.org/10.1093/biomet/asaa028.

Abstract:
Fully Bayesian inference in the presence of unequal probability sampling requires stronger structural assumptions on the data-generating distribution than frequentist semiparametric methods, but offers the potential for improved small-sample inference and convenient evidence synthesis. We demonstrate that the Bayesian exponentially tilted empirical likelihood can be used to combine the practical benefits of Bayesian inference with the robustness and attractive large-sample properties of frequentist approaches. Estimators defined as the solutions to unbiased estimating equations can be used to define a semiparametric model through the set of corresponding moment constraints. We prove Bernstein–von Mises theorems which show that the posterior constructed from the resulting exponentially tilted empirical likelihood becomes approximately normal, centred at the chosen estimator with matching asymptotic variance; thus, the posterior has properties analogous to those of the estimator, such as double robustness, and the frequentist coverage of any credible set will be approximately equal to its credibility. The proposed method can be used to obtain modified versions of existing estimators with improved properties, such as guarantees that the estimator lies within the parameter space. Unlike existing Bayesian proposals, our method does not prescribe a particular choice of prior or require posterior variance correction, and simulations suggest that it provides superior performance in terms of frequentist criteria.
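A minimal numerical sketch of the exponentially tilted empirical likelihood for the simplest moment constraint, E[X − θ] = 0 (so θ is the mean), may help; the unequal-probability sampling weights and general estimating equations of the paper are omitted, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Exponentially tilted empirical likelihood (ETEL) sketch for g(x, theta)
# = x - theta. The tilting parameter solves the inner problem
# min_lambda mean(exp(lambda * g)), whose optimum satisfies
# sum_i g_i * exp(lambda * g_i) = 0.
def etel_loglik(theta, x):
    g = x - theta
    lam = minimize(lambda l: np.mean(np.exp(l[0] * g)), x0=[0.0]).x[0]
    w = np.exp(lam * g)
    w /= w.sum()                      # tilted probabilities on the data
    return np.sum(np.log(w))          # log ETEL = sum_i log w_i(theta)

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=200)
# A Bayesian analysis adds a log-prior and samples theta, e.g. by MCMC:
for theta in (1.5, 2.0, 2.5):
    print(theta, etel_loglik(theta, x))
```

The log ETEL peaks near the sample mean, mirroring how the posterior in the paper concentrates at the chosen estimator.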
6

Dimas, Christos, Vassilis Alimisis, Nikolaos Uzunoglu, and Paul P. Sotiriadis. "A Point-Matching Method of Moment with Sparse Bayesian Learning Applied and Evaluated in Dynamic Lung Electrical Impedance Tomography". Bioengineering 8, no. 12 (November 25, 2021): 191. http://dx.doi.org/10.3390/bioengineering8120191.

Abstract:
Dynamic lung imaging is a major application of Electrical Impedance Tomography (EIT) due to EIT’s exceptional temporal resolution, low cost and absence of radiation. EIT however lacks spatial resolution, and the image reconstruction is very sensitive to mismatches between the actual object’s and the reconstruction domain’s geometries, as well as to signal noise. The non-linear nature of the reconstruction problem may also be a concern, since the lungs’ conductivity changes significantly due to inhalation and exhalation. In this paper, a recently introduced method of moment is combined with a sparse Bayesian learning approach to address the non-linearity issue, provide robustness to the reconstruction problem and reduce image artefacts. To evaluate the proposed methodology, we construct three CT-based time-variant 3D thoracic structures including the basic thoracic tissues and considering 5 different breath states from end-expiration to end-inspiration. The Graz consensus reconstruction algorithm for EIT (GREIT), the correlation coefficient (CC), the root mean square error (RMSE) and the full-reference (FR) metrics are applied for the image quality assessment. Qualitative and quantitative comparison with traditional and more advanced reconstruction techniques reveals that the proposed method shows improved performance in the majority of cases and metrics. Finally, the approach is applied to single-breath online in-vivo data to qualitatively verify its applicability.
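The sparse Bayesian learning ingredient can be sketched in isolation with a textbook ARD linear model in the spirit of Tipping's formulation; the design matrix, data and dimensions below are illustrative assumptions and are unrelated to the paper's EIT forward model.

```python
import numpy as np

# Minimal sparse Bayesian learning (ARD) sketch for y = Phi @ w + noise.
def sbl(Phi, y, noise_var=0.01, n_iter=50):
    n, d = Phi.shape
    alpha = np.ones(d)                              # per-weight precisions
    for _ in range(n_iter):
        # Gaussian posterior over weights given current hyperparameters
        Sigma = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi / noise_var)
        mu = Sigma @ Phi.T @ y / noise_var
        # Evidence (type-II ML) updates; a large alpha_i prunes weight i
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu ** 2 + 1e-12)
    return mu, alpha

rng = np.random.default_rng(2)
Phi = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[[3, 11]] = 1.5, -2.0
y = Phi @ w_true + 0.1 * rng.normal(size=100)
mu, _ = sbl(Phi, y)
print(np.round(mu, 2))    # most entries shrink toward zero
```

In the paper, this kind of sparsity-inducing prior acts on the coefficients of the point-matching method-of-moment forward model rather than on a random design matrix.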
7

Heath, Anna, Ioanna Manolopoulou, and Gianluca Baio. "Estimating the Expected Value of Sample Information across Different Sample Sizes Using Moment Matching and Nonlinear Regression". Medical Decision Making 39, no. 4 (May 2019): 347–59. http://dx.doi.org/10.1177/0272989x19837983.

Abstract:
Background. The expected value of sample information (EVSI) determines the economic value of any future study with a specific design aimed at reducing uncertainty about the parameters underlying a health economic model. This has potential as a tool for trial design; the cost and value of different designs could be compared to find the trial with the greatest net benefit. However, despite recent developments, EVSI analysis can be slow, especially when optimizing over a large number of different designs. Methods. This article develops a method to reduce the computation time required to calculate the EVSI across different sample sizes. Our method extends the moment-matching approach to EVSI estimation to optimize over different sample sizes for the underlying trial while retaining a similar computational cost to a single EVSI estimate. This extension calculates the posterior variance of the net monetary benefit across alternative sample sizes and then uses Bayesian nonlinear regression to estimate the EVSI across these sample sizes. Results. A health economic model developed to assess the cost-effectiveness of interventions for chronic pain demonstrates that this EVSI calculation method is fast and accurate for realistic models. This example also highlights how different trial designs can be compared using the EVSI. Conclusion. The proposed estimation method is fast and accurate when calculating the EVSI across different sample sizes. This will allow researchers to realize the potential of using the EVSI to determine an economically optimal trial design for reducing uncertainty in health economic models. Limitations. Our method involves rerunning the health economic model, which can be more computationally expensive than some recent alternatives, especially in complex models.
8

Browning, Alexander P., Christopher Drovandi, Ian W. Turner, Adrianne L. Jenner, and Matthew J. Simpson. "Efficient inference and identifiability analysis for differential equation models with random parameters". PLOS Computational Biology 18, no. 11 (November 28, 2022): e1010734. http://dx.doi.org/10.1371/journal.pcbi.1010734.

Abstract:
Heterogeneity is a dominant factor in the behaviour of many biological processes. Despite this, it is common for mathematical and statistical analyses to ignore biological heterogeneity as a source of variability in experimental data. Therefore, methods for exploring the identifiability of models that explicitly incorporate heterogeneity through variability in model parameters are relatively underdeveloped. We develop a new likelihood-based framework, based on moment matching, for inference and identifiability analysis of differential equation models that capture biological heterogeneity through parameters that vary according to probability distributions. As our novel method is based on an approximate likelihood function, it is highly flexible; we demonstrate identifiability analysis using both a frequentist approach based on profile likelihood, and a Bayesian approach based on Markov-chain Monte Carlo. Through three case studies, we demonstrate our method by providing a didactic guide to inference and identifiability analysis of hyperparameters that relate to the statistical moments of model parameters from independent observed data. Our approach has a computational cost comparable to analysis of models that neglect heterogeneity, a significant improvement over many existing alternatives. We demonstrate how analysis of random parameter models can aid better understanding of the sources of heterogeneity from biological data.
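A toy version of the moment-matching construction may clarify the approach: for a model whose parameter varies across individuals, compare the observed outputs with the model-induced output mean and variance through a Gaussian approximate likelihood. The logistic model, normal rate distribution, and all names below are illustrative assumptions, not the paper's case studies.

```python
import numpy as np

# Toy random-parameter model: logistic growth with individual growth
# rates r ~ Normal(mu_r, sigma_r^2); assess hyperparameters by matching
# the output mean/variance in a Gaussian approximate likelihood.
K, x0 = 100.0, 5.0
t_obs = np.linspace(1.0, 10.0, 5)

def solution(r, t):
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-np.outer(r, t)))

def output_moments(mu_r, sigma_r, n_mc=2000, seed=0):
    r = np.random.default_rng(seed).normal(mu_r, sigma_r, size=n_mc)
    x = solution(r, t_obs)              # shape (n_mc, n_times)
    return x.mean(axis=0), x.var(axis=0)

def approx_loglik(mu_r, sigma_r, data):
    m, v = output_moments(mu_r, sigma_r)
    v = v + 1e-6                        # guard against zero variance
    return -0.5 * np.sum(np.log(2 * np.pi * v) + (data - m) ** 2 / v)

rng = np.random.default_rng(3)
data = solution(rng.normal(0.8, 0.1, size=50), t_obs)   # synthetic cohort
print(approx_loglik(0.8, 0.1, data), approx_loglik(0.5, 0.1, data))
```

The approximate likelihood is larger at the generating hyperparameters, and it can be handed to a profile-likelihood or MCMC routine exactly as the abstract describes.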
9

Habibi, Reza. "Conditional Beta Approximation: Two Applications". Indonesian Journal of Mathematics and Applications 2, no. 1 (March 31, 2024): 9–23. http://dx.doi.org/10.21776/ub.ijma.2024.002.01.2.

Abstract:
Suppose that X and Y are two independent positive continuous random variables. Let P = X/(X+Y) and Z = X+Y. If X and Y have gamma distributions with the same scale parameter, then P has a beta distribution and P and Z are independent. When the distributions of these two variables are not gamma, the distribution of P is still well approximated by a beta distribution; however, P and Z are then dependent. Under the moment-matching method, fitting the beta distribution requires computing the moments of the conditional distribution. In this paper, some new methods for computing the moments of the conditional distribution of P given Z are proposed. First, a regression method is suggested; then Monte Carlo simulation is advised; the Bayesian posterior distribution of P is suggested; and applications of differential equations are also reviewed. These results are applied in two settings, namely variance change-point detection and the winning percentage of a gambling game. The probability of a change in variance in a sequence of variables, as a leading indicator of possible change, is proposed, as are the probability of winning in a sequential gambling framework, the optimal time to exit the gambling game, and a game-theoretic approach to the optimal exit-time problem. In all cases, beta approximations are used. Finally, a conclusion section is given.
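The basic moment-matching step, fitting a beta distribution to P = X/(X+Y) by equating its first two moments, can be checked with a short Monte Carlo sketch; the lognormal choices below are illustrative, and the paper's conditional-on-Z refinements are not reproduced.

```python
import numpy as np

# Fit Beta(a, b) to P = X/(X+Y) by matching mean and variance.
rng = np.random.default_rng(4)
X = rng.lognormal(0.0, 0.5, size=100_000)   # non-gamma positive variables
Y = rng.lognormal(0.3, 0.5, size=100_000)
P = X / (X + Y)

m, v = P.mean(), P.var()
# Beta moments: mean = a/(a+b), var = m(1-m)/(a+b+1); invert them:
s = m * (1.0 - m) / v - 1.0                 # s = a + b
a, b = m * s, (1.0 - m) * s
print(f"fitted Beta(a={a:.3f}, b={b:.3f})")
```

Conditioning on Z = X+Y replaces the unconditional moments above with conditional ones, which is precisely the quantity the paper's regression, Monte Carlo and Bayesian devices are designed to supply.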
10

Lu, Chi-Ken, and Patrick Shafto. "Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning". Entropy 23, no. 11 (October 23, 2021): 1387. http://dx.doi.org/10.3390/e23111387.

Abstract:
It is desirable to combine the expressive power of deep learning with Gaussian Process (GP) in one expressive Bayesian learning model. Deep kernel learning showed success as a deep network used for feature extraction. Then, a GP was used as the function model. Recently, it was suggested that, albeit training with marginal likelihood, the deterministic nature of a feature extractor might lead to overfitting, and replacement with a Bayesian network seemed to cure it. Here, we propose the conditional deep Gaussian process (DGP) in which the intermediate GPs in hierarchical composition are supported by the hyperdata and the exposed GP remains zero mean. Motivated by the inducing points in sparse GP, the hyperdata also play the role of function supports, but are hyperparameters rather than random variables. We follow our previous moment matching approach to approximate the marginal prior of the conditional DGP with a GP carrying an effective kernel. Thus, as in empirical Bayes, the hyperdata are learned by optimizing the approximate marginal likelihood which implicitly depends on the hyperdata via the kernel. We show the equivalence with deep kernel learning in the limit of dense hyperdata in latent space. However, the conditional DGP and the corresponding approximate inference enjoy the benefit of being more Bayesian than deep kernel learning. Preliminary extrapolation results demonstrate expressive power from the depth of hierarchy by exploiting the exact covariance and hyperdata learning, in comparison with GP kernel composition, DGP variational inference and deep kernel learning. We also address the non-Gaussian aspect of our model as well as a way of upgrading to full Bayesian inference.
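The moment-matching approximation mentioned here, replacing a two-layer composition with one GP whose effective kernel is the expected outer kernel over the inner GP, can be illustrated by Monte Carlo; the RBF kernels, grid and sample sizes are assumptions for illustration, and the papers derive analytic effective kernels with the inner layer conditioned on hyperdata.

```python
import numpy as np

# Monte Carlo estimate of the effective kernel E_g[k_f(g(x), g(x'))]
# for a two-layer composition f(g(x)) with GP layers and RBF kernels.
def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

x = np.linspace(-2.0, 2.0, 25)
L = np.linalg.cholesky(rbf(x, x) + 1e-8 * np.eye(x.size))

rng = np.random.default_rng(5)
n_mc = 500
K_eff = np.zeros((x.size, x.size))
for _ in range(n_mc):
    g = L @ rng.normal(size=x.size)   # one draw of the inner GP at x
    K_eff += rbf(g, g)                # outer kernel on the warped inputs
K_eff /= n_mc                         # moment-matched effective kernel
print(np.round(K_eff[0, :4], 3))
```

A GP with covariance K_eff is the single-layer stand-in whose marginal prior matches the composition's second moment.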
11

Heilbron, Micha, Jorie van Haren, Peter Hagoort, and Floris P. de Lange. "Lexical Processing Strongly Affects Reading Times But Not Skipping During Natural Reading". Open Mind 7 (2023): 757–83. http://dx.doi.org/10.1162/opmi_a_00099.

Abstract:
In a typical text, readers look much longer at some words than at others, even skipping many altogether. Historically, researchers explained this variation via low-level visual or oculomotor factors, but today it is primarily explained via factors determining a word’s lexical processing ease, such as how well word identity can be predicted from context or discerned from parafoveal preview. While the existence of these effects is well established in controlled experiments, the relative importance of prediction, preview and low-level factors in natural reading remains unclear. Here, we address this question in three large naturalistic reading corpora (n = 104, 1.5 million words), using deep neural networks and Bayesian ideal observers to model linguistic prediction and parafoveal preview from moment to moment in natural reading. Strikingly, neither prediction nor preview was important for explaining word skipping—the vast majority of explained variation was explained by a simple oculomotor model, using just fixation position and word length. For reading times, by contrast, we found strong but independent contributions of prediction and preview, with effect sizes matching those from controlled experiments. Together, these results challenge dominant models of eye movements in reading, and instead support alternative models that describe skipping (but not reading times) as largely autonomous from word identification, and mostly determined by low-level oculomotor information.
12

Lu, Chi-Ken, and Patrick Shafto. "Conditional Deep Gaussian Processes: Multi-Fidelity Kernel Learning". Entropy 23, no. 11 (November 20, 2021): 1545. http://dx.doi.org/10.3390/e23111545.

Abstract:
Deep Gaussian Processes (DGPs) were proposed as an expressive Bayesian model capable of a mathematically grounded estimation of uncertainty. The expressivity of DGPs results not only from the compositional character but also from the distribution propagation within the hierarchy. Recently, it was pointed out that the hierarchical structure of DGPs is well suited to modeling multi-fidelity regression, in which one is provided sparse observations with high precision and plenty of low-fidelity observations. We propose the conditional DGP model in which the latent GPs are directly supported by the fixed lower-fidelity data. The moment matching method is then applied to approximate the marginal prior of the conditional DGP with a GP. The obtained effective kernels are implicit functions of the lower-fidelity data, manifesting the expressivity contributed by distribution propagation within the hierarchy. The hyperparameters are learned by optimizing the approximate marginal likelihood. Experiments with synthetic and high-dimensional data show comparable performance against other multi-fidelity regression methods, variational inference, and multi-output GP. We conclude that, with the low-fidelity data and the hierarchical DGP structure, the effective kernel encodes the inductive bias for the true function, allowing compositional freedom.
13

Amini Farsani, Zahra, and Volker J. Schmid. "Modified Maximum Entropy Method and Estimating the AIF via DCE-MRI Data Analysis". Entropy 24, no. 2 (January 20, 2022): 155. http://dx.doi.org/10.3390/e24020155.

Abstract:
Background: For the kinetic models used in contrast-based medical imaging, the assignment of the arterial input function (AIF) is essential for estimating the physiological parameters of the tissue by solving an optimization problem. Objective: In the current study, we estimate the AIF based on a modified maximum entropy method. The effectiveness of several numerical methods to determine kinetic parameters and the AIF is evaluated in situations where not enough information about the AIF is available. The purpose of this study is to identify an appropriate method for estimating this function. Materials and Methods: The modified algorithm is a mixture of the maximum entropy approach with an optimization method named the teaching-learning method. Here, we apply this algorithm in a Bayesian framework to estimate the kinetic parameters while specifying the unique form of the AIF by the maximum entropy method. We assessed the proficiency of the proposed method for assigning the kinetic parameters in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), determining the AIF with several other parameter-estimation methods and a standard fixed-AIF method. A previously analyzed dataset consisting of contrast agent concentrations in tissue and plasma was used. Results and Conclusions: We compared the accuracy of the estimated parameters obtained from the MMEM with those of the empirical method, the maximum likelihood method, moment matching (the “method of moments”), the least-squares method, the modified maximum likelihood approach, and our previous work. Since the current algorithm does not suffer from the choice of starting point in the parameter estimation phase, it could find the model nearest to the empirical model of the data; the results indicated the Weibull distribution as an appropriate and robust AIF and illustrated the power and effectiveness of the proposed method for estimating the kinetic parameters.
14

Pan, Yuangang, Ivor W. Tsang, Yueming Lyu, Avinash K. Singh, and Chin-Teng Lin. "Online Mental Fatigue Monitoring via Indirect Brain Dynamics Evaluation". Neural Computation 33, no. 6 (May 13, 2021): 1616–55. http://dx.doi.org/10.1162/neco_a_01382.

Abstract:
Driver mental fatigue leads to thousands of traffic accidents. The increasing quality and availability of low-cost electroencephalogram (EEG) systems offer possibilities for practical fatigue monitoring. However, non-data-driven methods, designed for practical, complex situations, usually rely on handcrafted data statistics of EEG signals. To reduce human involvement, we introduce a data-driven methodology for online mental fatigue detection: self-weight ordinal regression (SWORE). Reaction time (RT), referring to the length of time people take to react to an emergency, is widely considered an objective behavioral measure for mental fatigue state. Since regression methods are sensitive to extreme RTs, we propose an indirect RT estimation based on preferences to explore the relationship between EEG and RT, which generalizes to any scenario where an objective fatigue indicator is available. In particular, SWORE evaluates the noisy EEG signals from multiple channels in terms of two states: a shaking state and a steady state. Modeling the shaking state can discriminate the reliable channels from the uninformative ones, while modeling the steady state can suppress the task-nonrelevant fluctuation within each channel. In addition, an online generalized Bayesian moment matching (online GBMM) algorithm is proposed to calibrate SWORE online and efficiently per participant. Experimental results with 40 participants show that SWORE can achieve results highly consistent with RT, demonstrating the feasibility and adaptability of our proposed framework for practical mental fatigue estimation.
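The projection at the heart of Bayesian moment matching is easy to state: when an exact Bayesian update turns a tractable posterior into a mixture, replace the mixture with a single member of the family that matches its moments. A minimal Gaussian sketch follows (illustrative only; the paper's online GBMM operates on the SWORE model's posterior, not this toy mixture).

```python
import numpy as np

# Collapse a Gaussian mixture to one Gaussian by matching mean/variance.
def moment_match(weights, means, variances):
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    mu = np.sum(w * means)                              # first moment
    var = np.sum(w * (variances + (means - mu) ** 2))   # second moment
    return mu, var

# Mixture arising from, e.g., one label-uncertain observation:
mu, var = moment_match([0.7, 0.3], np.array([0.0, 2.0]), np.array([1.0, 0.5]))
print(mu, var)   # the single Gaussian carrying the mixture's moments
```

Applying this collapse after every observation keeps the posterior in closed form, which is what makes the calibration online and efficient.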
15

Wang, Yilin, and Baokuan Chang. "Extraction of Human Motion Information from Digital Video Based on 3D Poisson Equation". Advances in Mathematical Physics 2021 (December 28, 2021): 1–11. http://dx.doi.org/10.1155/2021/1268747.

Abstract:
Based on the 3D Poisson equation, this paper extracts the features of digital-video human action sequences. By solving the Poisson equation on the silhouette sequence, space-time features, space-time structure features, shape features, and orientation features can be obtained. First, we use the silhouette structure features in three-dimensional space-time and the orientation features of the silhouette in three-dimensional space-time to represent the local features of the silhouette sequence, and the 3D Zernike moment feature to represent its overall features. Second, we combine a Bayesian classifier and an AdaBoost classifier to learn and classify the features of human action sequences, conduct experiments on the Weizmann video database, and run multiple experiments using the method of classifying samples and selecting partial combinations for training. Then, using the recognition algorithm of motion capture, the resulting three-dimensional model is matched with the models in a three-dimensional model database, the sequence with the smallest distance is calculated, and the corresponding skeleton is output as the result of motion capture. During the experiments, a human motion tracking method based on the efficient match kernel (EMK) image kernel descriptor was used; that is, a scale-invariant operator was used to compute statistics of the features of multiple training images, and the high-dimensional feature space was finally mapped into a low-dimensional one to obtain a feature space approximating the Gaussian kernel. The experimental results show that the method in this paper can effectively extract the characteristics of human body movements and has a good classification effect for bending, one-foot jumping, vertical jumping, waving, and other movements. Due to the linear separability of the data in the kernel space, fast linear interpolation regression is performed on the features in the feature space, which significantly improves the robustness and accuracy of the estimation of human motion pose in the image sequence.
16

Chin, Kuo-Hsuan, and Tzu-Yun Huang. "An Empirical Study of Taiwan’s Real Business Cycle". International Journal of Economics and Finance 10, no. 2 (January 10, 2018): 124. http://dx.doi.org/10.5539/ijef.v10n2p124.

Abstract:
We study the characteristics of the real business cycle and the sources of economic fluctuation in Taiwan over the last forty years, during which it experienced both the developing and the developed stages of the economy, by considering a small open economy real business cycle model with financial friction. In particular, the break point that distinguishes between the developing and developed stages of Taiwan’s economy is chosen on the basis of the International Monetary Fund (IMF) classification. We use a Bayesian approach to obtain the posterior densities for the structural parameters of interest. Conditioning on the Bayesian point estimates, the posterior mean in particular, we generate a set of statistical moments and related statistics that characterize the features and sources of the real business cycle. We find that a real business cycle model with financial friction explains the features of the real business cycle in the developing stage of Taiwan’s economy well. However, the results it provides are unsatisfactory for matching the characteristics of the real business cycle in the developed stage. In addition, the technology shock explains a large fraction of the economic fluctuation, particularly in real output, consumption and investment. More precisely, a permanent technology shock explains a larger fraction of the economic fluctuation than a transitory technology shock.
17

Buder, S., K. Lind, M. K. Ness, M. Asplund, L. Duong, J. Lin, J. Kos et al. "The GALAH survey: An abundance, age, and kinematic inventory of the solar neighbourhood made with TGAS". Astronomy & Astrophysics 624 (April 2019): A19. http://dx.doi.org/10.1051/0004-6361/201833218.

Abstract:
The overlap between the spectroscopic Galactic Archaeology with HERMES (GALAH) survey and Gaia provides a high-dimensional chemodynamical space of unprecedented size. We present a first analysis of a subset of this overlap, of 7066 dwarf, turn-off, and sub-giant stars. These stars have spectra from the GALAH survey and high parallax precision from the Gaia DR1 Tycho-Gaia Astrometric Solution. We investigate correlations between chemical compositions, ages, and kinematics for this sample. Stellar parameters and elemental abundances are derived from the GALAH spectra with the spectral synthesis code SPECTROSCOPY MADE EASY. We determine kinematics and dynamics, including action angles, from the Gaia astrometry and GALAH radial velocities. Stellar masses and ages are determined with Bayesian isochrone matching, using our derived stellar parameters and absolute magnitudes. We report measurements of Li, C, O, Na, Mg, Al, Si, K, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Cu, Zn, Y, as well as Ba and we note that we have employed non-LTE calculations for Li, O, Al, and Fe. We show that the use of astrometric and photometric data improves the accuracy of the derived spectroscopic parameters, especially log g. Focusing our investigation on the correlations between stellar age, iron abundance [Fe/H], and mean alpha-enhancement [α/Fe] of the magnitude-selected sample, we recover the result that stars of the high-α sequence are typically older than stars in the low-α sequence, the latter spanning iron abundances of −0.7 < [Fe/H] < +0.5. While these two sequences become indistinguishable in [α/Fe] vs. [Fe/H] at the metal-rich regime, we find that age can be used to separate stars from the extended high-α and the low-α sequence even in this regime. When dissecting the sample by stellar age, we find that the old stars (>8 Gyr) have lower angular momenta Lz than the Sun, which implies that they are on eccentric orbits and originate from the inner disc. Contrary to some previous smaller scale studies we find a continuous evolution in the high-α-sequence up to super-solar [Fe/H] rather than a gap, which has been interpreted as a separate “high-α metal-rich” population. Stars in our sample that are younger than 10 Gyr, are mainly found on the low α-sequence and show a gradient in Lz from low [Fe/H] (Lz > Lz, ⊙) towards higher [Fe/H] (Lz < Lz, ⊙), which implies that the stars at the ends of this sequence are likely not originating from the close solar vicinity.
18

Goedhart, S., W. D. Cotton, F. Camilo, M. A. Thompson, G. Umana, M. Bietenholz, P. A. Woudt et al. "The SARAO MeerKAT 1.3 GHz Galactic Plane Survey". Monthly Notices of the Royal Astronomical Society, May 3, 2024. http://dx.doi.org/10.1093/mnras/stae1166.

Abstract:
We present the SARAO MeerKAT Galactic Plane Survey (SMGPS), a 1.3 GHz continuum survey of almost half of the Galactic Plane (251° ≤ l ≤ 358° and 2° ≤ l ≤ 61° at |b| ≤ 1.5°). SMGPS is the largest, most sensitive and highest angular resolution 1 GHz survey of the Plane yet carried out, with an angular resolution of 8″ and a broadband RMS sensitivity of ∼10–20 μJy beam⁻¹. Here we describe the first publicly available data release from SMGPS which comprises data cubes of frequency-resolved images over 908–1656 MHz, power law fits to the images, and broadband zeroth moment integrated intensity images. A thorough assessment of the data quality and guidance for future usage of the data products are given. Finally, we discuss the tremendous potential of SMGPS by showcasing highlights of the Galactic and extragalactic science that it permits. These highlights include the discovery of a new population of non-thermal radio filaments; identification of new candidate supernova remnants, pulsar wind nebulae and planetary nebulae; improved radio/mid-IR classification of rare Luminous Blue Variables and discovery of associated extended radio nebulae; new radio stars identified by Bayesian cross-matching techniques; the realisation that many of the largest radio-quiet WISE H ii region candidates are not true H ii regions; and a large sample of previously undiscovered background H i galaxies in the Zone of Avoidance.
19

Bock, Andreas, and Colin J. Cotter. "Learning landmark geodesics using the ensemble Kalman filter". Foundations of Data Science, 2021, 0. http://dx.doi.org/10.3934/fods.2021020.

Abstract:
We study the problem of diffeomorphometric geodesic landmark matching where the objective is to find a diffeomorphism that, via its group action, maps between two sets of landmarks. It is well-known that the motion of the landmarks, and thereby the diffeomorphism, can be encoded by an initial momentum leading to a formulation where the landmark matching problem can be solved as an optimisation problem over such momenta. The novelty of our work lies in the application of a derivative-free Bayesian inverse method for learning the optimal momentum encoding the diffeomorphic mapping between the template and the target. The method we apply is the ensemble Kalman filter, an extension of the Kalman filter to nonlinear operators. We describe an efficient implementation of the algorithm and show several numerical results for various target shapes.
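A generic sketch of ensemble Kalman inversion applied to an abstract forward map G may help; the toy map, noise level and ensemble size below are illustrative assumptions, and the paper's G (the geodesic shooting map for landmarks) is not reproduced.

```python
import numpy as np

# Derivative-free ensemble Kalman inversion: seek u with G(u) ≈ y by
# iterating Kalman-style updates on an ensemble of candidate solutions.
def eki(G, y, U, gamma, n_iter=20, seed=6):
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        W = np.array([G(u) for u in U])          # ensemble predictions
        du = U - U.mean(axis=0)
        dw = W - W.mean(axis=0)
        C_uw = du.T @ dw / (len(U) - 1)          # cross-covariance
        C_ww = dw.T @ dw / (len(U) - 1)          # prediction covariance
        K = C_uw @ np.linalg.inv(C_ww + gamma * np.eye(len(y)))
        y_pert = y + np.sqrt(gamma) * rng.normal(size=W.shape)
        U = U + (y_pert - W) @ K.T               # Kalman update per member
    return U

G = lambda u: np.array([2.0 * u[0] + u[1], u[1] ** 3])  # toy nonlinear map
y = G(np.array([1.0, 2.0]))                             # synthetic target
U0 = np.random.default_rng(7).normal(size=(50, 2))
print(eki(G, y, U0, gamma=1e-4).mean(axis=0))           # approaches (1, 2)
```

Only evaluations of G are required, which is what makes the approach attractive when the forward map is a geodesic shooting solver without convenient derivatives.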