Academic literature on the topic "Penalized log-likelihood criterion"

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Penalized log-likelihood criterion".


Journal articles on the topic "Penalized log-likelihood criterion"

1

Gao, Yongfeng, Siming Lu, Yongyi Shi, Shaojie Chang, Hao Zhang, Wei Hou, Lihong Li, and Zhengrong Liang. "A Joint-Parameter Estimation and Bayesian Reconstruction Approach to Low-Dose CT." Sensors 23, no. 3 (January 26, 2023): 1374. http://dx.doi.org/10.3390/s23031374.

Abstract:
Most penalized maximum likelihood methods for tomographic image reconstruction based on Bayes' law include a freely adjustable hyperparameter to balance the data fidelity term and the prior/penalty term for a specific noise–resolution tradeoff. In many applications, the hyperparameter is determined empirically in a trial-and-error fashion, selecting the optimal result from multiple iterative reconstructions. These penalized methods are not only time-consuming by their iterative nature, but also require manual adjustment. This study aims to investigate a theory-based strategy for Bayesian image reconstruction without a freely adjustable hyperparameter, to substantially save time and computational resources. The Bayesian image reconstruction problem is formulated by two probability density functions (PDFs), one for the data fidelity term and the other for the prior term. When formulating these PDFs, we introduce two parameters. While these two parameters ensure that the PDFs completely describe the data and prior terms, they cannot be determined from the acquired data; thus, they are called complete but unobservable parameters. Estimating these two parameters becomes possible under conditional expectation and maximization for the image reconstruction, given the acquired data and the PDFs. This leads to an iterative algorithm, denoted joint-parameter-Bayes, which jointly estimates the two parameters and computes the to-be-reconstructed image by maximizing the a posteriori probability. In addition to the theoretical formulation, comprehensive simulation experiments are performed to analyze the stopping criterion of the iterative joint-parameter-Bayes method. Finally, given the data, an optimal reconstruction is obtained without any freely adjustable hyperparameter by satisfying the PDF condition for both the data likelihood and the prior probability, and by satisfying the stopping criterion. Moreover, the stability of joint-parameter-Bayes is investigated with respect to factors such as initialization, the PDF specification, and renormalization in an iterative manner. Both phantom simulation and clinical patient data results show that joint-parameter-Bayes can provide reconstructed image quality comparable to that of conventional methods, but with much less reconstruction time. To examine the response of the algorithm to different types of noise, three common noise models are introduced into the simulation data: white Gaussian noise added to post-log sinogram data, Poisson-like signal-dependent noise added to post-log sinogram data, and Poisson noise added to the pre-log transmission data. The experimental outcomes for white Gaussian noise reveal that the two parameters estimated by the joint-parameter-Bayes method agree well with the simulations. It is observed that the parameter introduced to satisfy the prior's PDF is more sensitive for stopping the iteration process under all three noise models. A stability investigation showed that initialization with a filtered back-projection image is very robust. Clinical patient data demonstrated the effectiveness of the proposed joint-parameter-Bayes method and stopping criterion.
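For orientation, the generic penalized maximum-likelihood (MAP) objective that this abstract builds on can be written as a short math sketch; the symbols below (y for the measured sinogram data, x for the image, R for the prior/penalty term, and β for the freely adjustable hyperparameter that joint-parameter-Bayes seeks to eliminate) are generic notation, not the paper's own:

    \hat{x} = \arg\max_{x} \left\{ \log p(y \mid x) - \beta \, R(x) \right\}

The paper's contribution is to replace the manual choice of β by two complete-but-unobservable parameters that are estimated jointly with x via conditional expectation and maximization.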
2

Tang, Jiarui, An-Min Tang, and Niansheng Tang. "Variable selection for joint models of multivariate skew-normal longitudinal and survival data." Statistical Methods in Medical Research, July 5, 2023. http://dx.doi.org/10.1177/09622802231181767.

Abstract:
In recent years, many joint models of multivariate skew-normal longitudinal and survival data have been proposed to accommodate the non-normality of longitudinal outcomes, but existing work did not consider variable selection. This article investigates simultaneous parameter estimation and variable selection in joint modeling of longitudinal and survival data. The penalized splines technique is used to estimate the unknown log baseline hazard function, and the rectangle integral method is adopted to approximate the conditional survival function. A Monte Carlo expectation-maximization algorithm is developed to estimate model parameters. Based on local linear approximations to the conditional expectation of the likelihood function and the penalty function, a one-step sparse estimation procedure is proposed to circumvent the computational challenge of optimizing the penalized conditional expectation of the likelihood function; the procedure is used to select significant covariates and trajectory functions, and to identify departures from normality in the longitudinal data. A Bayesian information criterion based on the conditional expectation of the likelihood function is developed to select the optimal tuning parameter. Simulation studies and a real example from a clinical trial are used to illustrate the proposed methodologies.
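As a hedged sketch of the kind of criterion this abstract describes (the notation is generic, not the authors'), a penalized conditional expectation of the log-likelihood and a BIC-type rule for the tuning parameter λ typically take the form

    Q_{\lambda}(\theta) = \mathbb{E}\left[\ell(\theta)\right] - n \sum_{j} p_{\lambda}(|\beta_j|), \qquad \mathrm{BIC}(\lambda) = -2\, \mathbb{E}\left[\ell(\hat{\theta}_{\lambda})\right] + \mathrm{df}_{\lambda} \log n

where p_λ is a sparsity-inducing penalty (e.g. SCAD or LASSO), df_λ counts the nonzero coefficients at λ, and the λ minimizing BIC(λ) is retained.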
3

Pluntz, Matthieu, Cyril Dalmasso, Pascale Tubert‐Bitter, and Ismaïl Ahmed. "A Simple Information Criterion for Variable Selection in High‐Dimensional Regression." Statistics in Medicine, December 12, 2024. https://doi.org/10.1002/sim.10275.

Abstract:
High-dimensional regression problems, for example with genomic or drug exposure data, typically involve automated selection of a sparse set of regressors. Penalized regression methods like the LASSO can deliver a family of candidate sparse models. To select one, there are criteria balancing log-likelihood and model size, the most common being AIC and BIC. These two methods do not take into account the implicit multiple testing performed when selecting variables in a high-dimensional regression, which makes them too liberal. We propose the extended AIC (EAIC), a new information criterion for sparse model selection in high-dimensional regressions. It allows for asymptotic FWER control when the candidate regressors are independent. It is based on a simple formula involving the model log-likelihood, the model size, the total number of candidate regressors, and the FWER target. In a simulation study over a wide range of linear and logistic regression settings, we assessed the variable selection performance of the EAIC and of other information criteria (including some that also use the number of candidate regressors: mBIC, mAIC, and EBIC) in conjunction with the LASSO. Our method controls the FWER in nearly all settings, in contrast to the AIC and BIC, which produce many false positives. We also illustrate it for the automated signal detection of adverse drug reactions on the French pharmacovigilance spontaneous reporting database.
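The workflow this abstract describes (score each candidate model on the LASSO path with a penalized log-likelihood criterion, then keep the best) can be sketched in Python. The sketch below uses classical BIC as the selection rule; the EAIC formula itself is not reproduced, only flagged in a comment, and all data and variable names are illustrative.

    import numpy as np
    from sklearn.linear_model import lasso_path, LinearRegression

    rng = np.random.default_rng(0)
    n, p = 200, 50
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:3] = [2.0, -1.5, 1.0]          # three truly active regressors
    y = X @ beta_true + rng.standard_normal(n)

    # Candidate sparse models along the LASSO regularization path.
    alphas, coefs, _ = lasso_path(X, y)

    best_bic, best_support = np.inf, None
    for j in range(len(alphas)):
        support = np.flatnonzero(coefs[:, j])
        k = support.size
        if k == 0:
            rss = np.sum((y - y.mean()) ** 2)
        else:
            # Refit OLS on the selected support before scoring the model.
            fit = LinearRegression().fit(X[:, support], y)
            rss = np.sum((y - fit.predict(X[:, support])) ** 2)
        # Gaussian log-likelihood up to an additive constant.
        loglik = -0.5 * n * np.log(rss / n)
        # Classical BIC; an EAIC-style criterion would add a term involving
        # the number of candidate regressors p and the FWER target (see paper).
        bic = -2.0 * loglik + k * np.log(n)
        if bic < best_bic:
            best_bic, best_support = bic, support

    print("selected variables:", best_support)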
4

Leonardi, Florencia, Rodrigo Carvalho, and Iara Frondana. "Structure recovery for partially observed discrete Markov random fields on graphs under not necessarily positive distributions." Scandinavian Journal of Statistics, August 2, 2023. http://dx.doi.org/10.1111/sjos.12674.

Abstract:
We propose a penalised conditional likelihood criterion to estimate the basic neighbourhood of each node in a discrete Markov random field that can be partially observed. We prove the convergence of the estimator in the case of a finite or countably infinite set of nodes. The estimated neighbourhoods can be combined to estimate the underlying graph. In the finite case, the graph can be recovered with probability one. In the countably infinite case, we can recover any finite sub-graph with probability one by allowing the candidate neighbourhoods to grow as a function o(log n), with n the sample size. Our method requires minimal assumptions on the probability distribution and, contrary to other approaches in the literature, the usual positivity condition is not needed. We evaluate the estimator's performance on simulated data and apply the methodology to a real dataset of stock index markets in different countries.
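A hedged sketch of the general shape of such a criterion (notation illustrative, not the authors'): for each node v, the estimated neighbourhood maximizes a conditional log-likelihood penalized by neighbourhood size,

    \hat{\mathrm{ne}}(v) = \arg\max_{W \subseteq V \setminus \{v\}} \left\{ \hat{\ell}_n(v \mid W) - \mathrm{pen}_n(W) \right\}

where \hat{\ell}_n(v \mid W) is the maximized conditional log-likelihood of node v given candidate neighbourhood W and pen_n(W) grows with |W|; the o(log n) growth condition in the abstract constrains the candidate sets W.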

Dissertations on the topic "Penalized log-likelihood criterion"

1

Aubert, Julien. "Théorie de l'estimation pour les processus d'apprentissage." Electronic Thesis or Diss., Université Côte d'Azur, 2025. http://www.theses.fr/2025COAZ5001.

Abstract:
This thesis considers the problem of estimating the learning process of an individual during a task, based on the observed choices or actions of that individual. This question lies at the intersection of cognition, statistics, and reinforcement learning, and involves developing models that accurately capture the dynamics of learning, estimating model parameters, and selecting the best-fitting model. A key difficulty is that learning, by nature, leads to non-independent and non-stationary data, as the individual selects its actions depending on the outcomes of its previous choices. Existing statistical theories and methods are well established for independent and stationary data, but their application to a learning framework introduces significant challenges. This thesis seeks to bridge the gap between empirical methods and theoretical guarantees in computational modeling. I first explore the properties of maximum likelihood estimation on a model of learning based on a bandit problem. I then present general theoretical results on penalized log-likelihood model selection for non-stationary and dependent data, for which I develop a new concentration inequality for the suprema of renormalized processes. I also introduce a hold-out procedure and theoretical guarantees for it in a learning framework. These theoretical results are supported by applications to synthetic data and to real cognitive experiments in psychology and ethology.
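The penalized log-likelihood model selection the abstract refers to has, in its generic form (notation illustrative), the shape

    \hat{m} = \arg\min_{m \in \mathcal{M}} \left\{ -\ell_n(\hat{s}_m) + \mathrm{pen}(m) \right\}

where ℓ_n(ŝ_m) is the maximized log-likelihood of the observed action sequence under model m and pen(m) grows with model complexity; the thesis's contribution is to justify such criteria when the data are non-stationary and dependent.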

Book chapters on the topic "Penalized log-likelihood criterion"

1

Dawid, A. P. "Prequential Analysis, Stochastic Complexity and Bayesian Inference." In Bayesian Statistics 4, 109–26. Oxford: Oxford University Press, 1992. http://dx.doi.org/10.1093/oso/9780198522669.003.0007.

Abstract:
Prequential Analysis addresses the empirical assessment of statistical models, and of their associated forecasting techniques, using techniques borrowed from the methodology of Probability Forecasting. In the theory of Stochastic Complexity, the empirical assessment of a model is based on the minimal length of a coded message needed to transmit the data. It turns out that this is essentially the same as prequential assessment based on the logarithmic scoring rule. These approaches are particularly well suited to model selection, where they provide further justification for the use of Bayes factors, or the asymptotically equivalent Jeffreys-Schwarz-BIC penalized log-likelihood criterion.
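The asymptotic equivalence invoked here is the standard Laplace/Schwarz approximation (stated in generic notation): for a model m with k_m free parameters, maximum likelihood estimate \hat{\theta}_m, and n observations,

    \log p(x \mid m) = \ell_n(\hat{\theta}_m) - \frac{k_m}{2} \log n + O_P(1)

so ranking models by the BIC penalized log-likelihood asymptotically reproduces the ranking by marginal likelihoods, and hence by Bayes factors.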
2

Anderson, Raymond A. "Stats & Maths & Unicorns." In Credit Intelligence & Modelling, 405–34. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192844194.003.0011.

Abstract:
This chapter covers basic statistical concepts. Most statistics relate to hypothesis testing, and others to variable selection and model fitting. The chapter's title alludes to the fact that an exact match between a theoretical and an empirical distribution is as rare as a unicorn. (1) Dispersion—measures of random variation—variance and its inflation factor, covariance and correlations {Pearson's product-moment, Spearman's rank order}, and the Mahalanobis distance. (2) Goodness-of-fit—do observations match expectations? This applies to both continuous dependent variables {R-squared and adjusted R2} and categorical ones {Pearson's chi-square, Hosmer–Lemeshow statistic}. (3) Likelihood—assesses estimates' goodness-of-fit to binary dependent variables {log-likelihood, deviance}, plus the Akaike and Bayesian information criteria used to penalize complexity. (4) The Holy Trinity of Statistics—i) Neyman–Pearson's 'likelihood ratio'—the basis for model comparisons; ii) Wald's chi-square—for potential variable removal; iii) Rao's score chi-square—for potential variable inclusion. These are all used in logistic regression.
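For the information criteria mentioned in (3), the standard definitions (in generic notation) are

    \mathrm{AIC} = -2\,\ell(\hat{\theta}) + 2k, \qquad \mathrm{BIC} = -2\,\ell(\hat{\theta}) + k \log n

where ℓ(θ̂) is the maximized log-likelihood, k the number of fitted parameters, and n the sample size; both trade fit against complexity, with BIC penalizing model size more heavily once n exceeds e².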