Academic literature on the topic 'Multimodal Posteriors'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multimodal Posteriors.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multimodal Posteriors"

1. Speagle, Joshua S. "dynesty: a dynamic nested sampling package for estimating Bayesian posteriors and evidences." Monthly Notices of the Royal Astronomical Society 493, no. 3 (February 3, 2020): 3132–58. http://dx.doi.org/10.1093/mnras/staa278.

Abstract:
We present dynesty, a public, open-source Python package to estimate Bayesian posteriors and evidences (marginal likelihoods) using the dynamic nested sampling methods developed by Higson et al. By adaptively allocating samples based on posterior structure, dynamic nested sampling has the benefits of Markov chain Monte Carlo (MCMC) algorithms that focus exclusively on posterior estimation while retaining nested sampling's ability to estimate evidences and sample from complex, multimodal distributions. We provide an overview of nested sampling, its extension to dynamic nested sampling, the algorithmic challenges involved, and the various approaches taken to solve them in this and previous work. We then examine dynesty's performance on a variety of toy problems along with several astronomical applications. We find that in particular problems dynesty can provide substantial improvements in sampling efficiency compared to popular MCMC approaches in the astronomical literature. More detailed statistical results related to nested sampling are also included in the appendix.
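For readers new to the method, here is a deliberately minimal, library-free sketch of the static nested sampling loop that dynesty generalizes: a set of live points shrinks the prior volume geometrically while the evidence integral accumulates. The bimodal toy likelihood, the rejection-sampling replacement step, and all tuning constants are illustrative choices of ours, not dynesty's.

```python
import math
import random

random.seed(1)

# Bimodal likelihood: equal-weight Gaussians at -5 and +5; prior U(-10, 10).
# Analytic evidence: Z ~= 1/20, since each Gaussian integrates to ~1 inside the prior.
def loglike(x):
    g = lambda mu: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)
    return math.log(0.5 * g(-5.0) + 0.5 * g(5.0))

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log1p(math.exp(min(a, b) - m))

def nested_sampling(nlive=200, niter=1500):
    live = [random.uniform(-10.0, 10.0) for _ in range(nlive)]
    logl = [loglike(x) for x in live]
    logz, logx_prev = -math.inf, 0.0
    for i in range(1, niter + 1):
        worst = min(range(nlive), key=lambda j: logl[j])
        lstar = logl[worst]
        logx = -i / nlive                       # expected log prior volume
        logw = math.log(math.exp(logx_prev) - math.exp(logx))
        logz = logaddexp(logz, lstar + logw)    # accumulate the evidence
        logx_prev = logx
        # Draw a replacement above the likelihood threshold by rejection
        # sampling from the prior (fine in 1-D, hopeless in high dimensions).
        x = random.uniform(-10.0, 10.0)
        while loglike(x) <= lstar:
            x = random.uniform(-10.0, 10.0)
        live[worst], logl[worst] = x, loglike(x)
    return logz   # leftover live-point mass (well under 1% here) is ignored
```

With these settings the estimate lands within a few tenths of a nat of ln(1/20); dynesty's contribution is doing this robustly, dynamically, and in many dimensions.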
2. Campante, Tiago L., Tanda Li, J. M. Joel Ong, Enrico Corsaro, Margarida S. Cunha, Timothy R. Bedding, Diego Bossini, et al. "Revisiting the Red Giant Branch Hosts KOI-3886 and ι Draconis. Detailed Asteroseismic Modeling and Consolidated Stellar Parameters." Astronomical Journal 165, no. 5 (April 27, 2023): 214. http://dx.doi.org/10.3847/1538-3881/acc9c1.

Abstract:
Asteroseismology is playing an increasingly important role in the characterization of red giant host stars and their planetary systems. Here, we conduct detailed asteroseismic modeling of the evolved red giant branch (RGB) hosts KOI-3886 and ι Draconis, making use of end-of-mission Kepler (KOI-3886) and multisector TESS (ι Draconis) time-series photometry. We also model the benchmark star KIC 8410637, a member of an eclipsing binary, thus providing a direct test to the seismic determination. We test the impact of adopting different sets of observed modes as seismic constraints. Inclusion of ℓ = 1 and 2 modes improves the precision of the stellar parameters, albeit marginally, compared to adopting radial modes alone, with 1.9%–3.0% (radius), 5%–9% (mass), and 19%–25% (age) reached when using all p-dominated modes as constraints. Given the very small spacing of adjacent dipole mixed modes in evolved RGB stars, the sparse set of observed g-dominated modes is not able to provide extra constraints, further leading to highly multimodal posteriors. Access to multiyear time-series photometry does not improve matters, with detailed modeling of evolved RGB stars based on (lower-resolution) TESS data sets attaining a precision commensurate with that based on end-of-mission Kepler data. Furthermore, we test the impact of varying the atmospheric boundary condition in our stellar models. We find the mass and radius estimates to be insensitive to the description of the near-surface layers, at the expense of substantially changing both the near-surface structure of the best-fitting models and the values of associated parameters like the initial helium abundance, Y_i. Attempts to measure Y_i from seismic modeling of red giants may thus be systematically dependent on the choice of atmospheric physics.
3. Zhao, Yu, Pan Deng, Junting Liu, Xiaofeng Jia, and Mulan Wang. "Causal Conditional Hidden Markov Model for Multimodal Traffic Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4929–36. http://dx.doi.org/10.1609/aaai.v37i4.25619.

Abstract:
Multimodal traffic flow can reflect the health of the transportation system, and its prediction is crucial to urban traffic management. Recent works overemphasize spatio-temporal correlations of traffic flow, ignoring the physical concepts that lead to the generation of observations and their causal relationship. Spatio-temporal correlations are considered unstable under the influence of different conditions, and spurious correlations may exist in observations. In this paper, we analyze the physical concepts affecting the generation of multimode traffic flow from the perspective of the observation generation principle and propose a Causal Conditional Hidden Markov Model (CCHMM) to predict multimodal traffic flow. In the latent variables inference stage, a posterior network disentangles the causal representations of the concepts of interest from conditional information and observations, and a causal propagation module mines their causal relationship. In the data generation stage, a prior network samples the causal latent variables from the prior distribution and feeds them into the generator to generate multimodal traffic flow. We use a mutually supervised training method for the prior and posterior to enhance the identifiability of the model. Experiments on real-world datasets show that CCHMM can effectively disentangle causal representations of concepts of interest and identify causality, and accurately predict multimodal traffic flow.
4. Posselt, Derek J., and Craig H. Bishop. "Nonlinear Parameter Estimation: Comparison of an Ensemble Kalman Smoother with a Markov Chain Monte Carlo Algorithm." Monthly Weather Review 140, no. 6 (June 1, 2012): 1957–74. http://dx.doi.org/10.1175/mwr-d-11-00242.1.

Abstract:
This paper explores the temporal evolution of cloud microphysical parameter uncertainty using an idealized 1D model of deep convection. Model parameter uncertainty is quantified using a Markov chain Monte Carlo (MCMC) algorithm. A new form of the ensemble transform Kalman smoother (ETKS) appropriate for the case where the number of ensemble members exceeds the number of observations is then used to obtain estimates of model uncertainty associated with variability in model physics parameters. Robustness of the parameter estimates and ensemble parameter distributions derived from ETKS is assessed via comparison with MCMC. Nonlinearity in the relationship between parameters and model output gives rise to a non-Gaussian posterior probability distribution for the parameters that exhibits skewness early and multimodality late in the simulation. The transition from unimodal to multimodal posterior probability density function (PDF) reflects the transition from convective to stratiform rainfall. ETKS-based estimates of the posterior mean are shown to be robust, as long as the posterior PDF has a single mode. Once multimodality manifests in the solution, the MCMC posterior parameter means and variances differ markedly from those from the ETKS. However, it is also shown that if the ETKS is given a multimode prior ensemble, multimodality is preserved in the ETKS posterior analysis. These results suggest that the primary limitation of the ETKS is not the inability to deal with multimodal, non-Gaussian priors. Rather it is the inability of the ETKS to represent posterior perturbations as nonlinear functions of prior perturbations that causes the most profound difference between MCMC posterior PDFs and ETKS posterior PDFs.
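The ETKS itself is a deterministic square-root smoother; as a rough stand-in for the ensemble Kalman update family it belongs to, the sketch below applies the simpler stochastic (perturbed-observation) EnKF analysis step to a scalar Gaussian case where the exact posterior is known. The toy numbers are ours. Note that the analysis is a linear function of the prior perturbations, which is precisely the limitation the abstract identifies for multimodal posteriors.

```python
import random
import statistics

random.seed(0)

def enkf_update(prior_ens, obs, obs_var):
    """Stochastic EnKF analysis step for a scalar state with identity observation."""
    var = statistics.variance(prior_ens)
    gain = var / (var + obs_var)          # Kalman gain from the ensemble spread
    # Perturbed observations keep the analysis spread statistically correct.
    return [x + gain * (obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in prior_ens]

# Gaussian prior N(0, 1), observation d = 1 with noise variance R = 1:
# the exact posterior is N(0.5, 0.5), which the ensemble should reproduce.
prior = [random.gauss(0.0, 1.0) for _ in range(20000)]
post = enkf_update(prior, obs=1.0, obs_var=1.0)
```

Feed the same update a bimodal prior ensemble and both modes are shifted by the same linear map, which is why a multimode prior survives the analysis, as the paper observes.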
5. Karkhaneh, Reza, Ahmad Masoumi, Nazanin Ebrahimiadib, Hormoz Chams, and Mojtaba Abrishami. "Multimodal imaging in posterior microphthalmos." Journal of Current Ophthalmology 31, no. 3 (September 2019): 335–38. http://dx.doi.org/10.1016/j.joco.2019.01.001.

6. Ba, Yuming, Jana de Wiljes, Dean S. Oliver, and Sebastian Reich. "Randomized maximum likelihood based posterior sampling." Computational Geosciences 26, no. 1 (December 20, 2021): 217–39. http://dx.doi.org/10.1007/s10596-021-10100-y.

Abstract:
Minimization of a stochastic cost function is commonly used for approximate sampling in high-dimensional Bayesian inverse problems with Gaussian prior distributions and multimodal posterior distributions. The density of the samples generated by minimization is not the desired target density, unless the observation operator is linear, but the distribution of samples is useful as a proposal density for importance sampling or for Markov chain Monte Carlo methods. In this paper, we focus on applications to sampling from multimodal posterior distributions in high dimensions. We first show that sampling from multimodal distributions is improved by computing all critical points instead of only minimizers of the objective function. For applications to high-dimensional geoscience inverse problems, we demonstrate an efficient approximate weighting that uses a low-rank Gauss-Newton approximation of the determinant of the Jacobian. The method is applied to two toy problems with known posterior distributions and a Darcy flow problem with multiple modes in the posterior.
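In the linear-Gaussian special case the abstract alludes to, the randomized-maximum-likelihood minimizer has a closed form and the samples are exact posterior draws, with no reweighting needed. The scalar toy below (notation and numbers ours, no multimodality) verifies that baseline.

```python
import random
import statistics

random.seed(42)

G = 2.0        # linear forward operator
R = 0.25       # observation noise variance
d = 1.9        # observed datum
# Exact Gaussian posterior for a N(0, 1) prior:
prec = G * G / R + 1.0
post_mean = (G * d / R) / prec       # posterior mean
post_var = 1.0 / prec                # posterior variance

def rml_sample():
    """One randomized-maximum-likelihood draw (closed-form minimizer)."""
    mu = random.gauss(0.0, 1.0)              # perturbed prior mean
    eps = random.gauss(0.0, R ** 0.5)        # perturbed observation
    # argmin_m  (G*m - (d + eps))^2 / R + (m - mu)^2
    return (G * (d + eps) / R + mu) / prec

samples = [rml_sample() for _ in range(40000)]
```

Once G is nonlinear the minimizers are no longer exact draws, which is where the paper's critical-point enumeration and Jacobian-based weighting come in.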
7. Singh, Ramandeep, Uday Tekchandani, Bruttendu Moharana, and Ankur Singh. "Multimodal imaging in a case of posterior microphthalmos." Indian Journal of Ophthalmology - Case Reports 1, no. 3 (2021): 422. http://dx.doi.org/10.4103/ijo.ijo_3474_20.

8. Boulter, Daniel J., Marco Luigetti, Zoran Rumboldt, Julio A. Chalela, and Alessandro Cianfoni. "Multimodal CT imaging of a posterior fossa stroke." Neurological Sciences 33, no. 1 (June 22, 2011): 215–16. http://dx.doi.org/10.1007/s10072-011-0652-y.

9. Forte, Raimondo, Florent Aptel, Audrey Feldmann, and Christophe Chiquet. "Multimodal Imaging of Posterior Polar Annular Choroidal Dystrophy." Retinal Cases & Brief Reports 12, no. 1 (2018): 29–32. http://dx.doi.org/10.1097/icb.0000000000000400.

10. Gill, Jeff, and George Casella. "Dynamic Tempered Transitions for Exploring Multimodal Posterior Distributions." Political Analysis 12, no. 4 (2004): 425–43. http://dx.doi.org/10.1093/pan/mph027.

Abstract:
Multimodal, high-dimension posterior distributions are well known to cause mixing problems for standard Markov chain Monte Carlo (MCMC) procedures; unfortunately such functional forms readily occur in empirical political science. This is a particularly important problem in applied Bayesian work because inferences are made from finite intervals of the Markov chain path. To address this issue, we develop and apply a new MCMC algorithm based on tempered transitions of simulated annealing, adding a dynamic element that allows the chain to self-tune its annealing schedule in response to current posterior features. This important feature prevents the Markov chain from getting trapped in minor modal areas for long periods of time. The algorithm is applied to a probabilistic spatial model of voting in which the objective function of interest is the candidate's expected return. We first show that such models can lead to complex target forms and then demonstrate that the dynamic algorithm easily handles even large problems of this kind.
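Gill and Casella's contribution is a self-tuning annealing schedule for tempered transitions; as a simplified illustration of the underlying mechanism only, the sketch below uses fixed-ladder parallel tempering, where heated replicas cross the barrier between modes and swap states down to the cold chain. The bimodal target, temperatures, and step sizes are arbitrary choices of ours, not the paper's algorithm.

```python
import math
import random

random.seed(7)

# Bimodal target: equal-weight Gaussians at -4 and +4, sd 0.5 (unnormalized).
def logp(x):
    return math.log(0.5 * math.exp(-0.5 * ((x + 4) / 0.5) ** 2)
                    + 0.5 * math.exp(-0.5 * ((x - 4) / 0.5) ** 2))

def metropolis_step(x, lp, temp, step):
    y = x + random.gauss(0.0, step)
    lq = logp(y)
    if math.log(random.random()) < (lq - lp) / temp:
        return y, lq
    return x, lp

def parallel_tempering(n=20000):
    temps = [1.0, 4.0, 16.0]
    xs = [4.0, 4.0, 4.0]                 # all replicas start in the right mode
    lps = [logp(x) for x in xs]
    cold = []
    for _ in range(n):
        for k, t in enumerate(temps):
            xs[k], lps[k] = metropolis_step(xs[k], lps[k], t, 1.0)
        k = random.randrange(2)          # propose a swap of adjacent replicas
        a = (lps[k + 1] - lps[k]) * (1 / temps[k] - 1 / temps[k + 1])
        if math.log(random.random()) < a:
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
            lps[k], lps[k + 1] = lps[k + 1], lps[k]
        cold.append(xs[0])
    return cold

def plain_chain(n=20000):
    """Untempered baseline: the same sampler with no heated replicas."""
    x, lp = 4.0, logp(4.0)
    out = []
    for _ in range(n):
        x, lp = metropolis_step(x, lp, 1.0, 1.0)
        out.append(x)
    return out
```

The plain chain stays trapped in the mode it starts in, while the tempered cold chain visits both, which is the mixing failure and remedy the abstract describes.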

Dissertations / Theses on the topic "Multimodal Posteriors"

1. Guerrier, Laura. "Substrats cognitifs et neuronaux de l'anosognosie dans la maladie d'Alzheimer typique et atypique : étude en neuropsychologie et imagerie multimodale." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30190.

Abstract:
Lack of awareness of one's own symptoms is a phenomenon frequently found in Alzheimer's disease (AD). This phenomenon, called anosognosia and mainly reported in the typical form of AD, involves a lack of awareness of one's own cognitive difficulties that can lead, on the one hand, to situations in which patients put themselves in danger and, on the other hand, to delayed diagnosis. As the origin of this phenomenon is still debated, three studies here focused on its cognitive and neural substrates in the typical form of Alzheimer's disease. Investigations using structural and metabolic imaging revealed an alteration of the dorsal anterior cingulate cortex related to anosognosia. The functional study revealed decreased connectivity between the precuneus and the pregenual anterior cingulate cortex, key regions of the default mode network that allow the self to be updated, thus maintaining a sense of continuity over time. In a fourth study, the link between unawareness of the disorders and delayed diagnosis was investigated in posterior cortical atrophy. We showed that the visual and gestural complaints reported by the patient do not exactly reflect his or her deficits, either cognitively or metabolically. It seems that in this atypical form of Alzheimer's disease, patients also have difficulty fully characterizing their deficits.
2. Trivedi, Neeta. "Robust, Energy‐efficient Distributed Inference in Wireless Sensor Networks With Applications to Multitarget Tracking." Thesis, 2014. https://etd.iisc.ac.in/handle/2005/4569.

Abstract:
The Joint Directors of Laboratories (JDL) data fusion model is a functional and comprehensive model for the data fusion and inference process and serves as a common frame of reference for fusion technologies and algorithms. However, in distributed data fusion (DDF), since a node fuses the data locally available to it and the data arriving at it from the network, the framework by which the inputs arrive at a node must be part of the DDF problem, all the more so when the network becomes an overwhelming part of the inference process, as in wireless sensor networks (WSN). The current state of the art is the result of parallel efforts in the constituent technology areas relating to the network or architecture domain and the application or fusion domain. Each of these disciplines is an evolving area requiring concentrated efforts to reach the Holy Grail; however, the most serious gap exists in the linkages within and across the two domains. The goal of this thesis is to investigate how architectural issues can be crucial to maintaining provably correct solutions for distributed inference in WSN, to examine the requirements of the networking structure for multitarget tracking in WSN as the boundaries get pushed in terms of target signature separation, sensor location uncertainties, reporting structure changes, and energy scarcity, and to propose robust and energy-efficient solutions for multitarget tracking in WSN. The findings point to an architecture that is achievable with today's technology. This thesis shows the feasibility of using this architecture for efficient integrated execution of the architecture-domain and fusion-domain functionality. Specific contributions in the architecture domain include an optimal lower bound on the energy required for broadcast to a set of nodes, a QoS- and resource-aware broadcast algorithm, and a fusion-aware convergecast algorithm. The contributions in the fusion domain include the following.
An extension to the JDL model is proposed that accounts for DDF. Probabilistic graphical models are introduced with the motivation of balancing computation load and communication overheads among sensor nodes. Under the assumption that evidence originates from sensor nodes and a large part of the inference must be drawn locally, the model allows inference responsibilities to be mapped to sensor nodes in a distributed manner. An algorithm is proposed that formulates maximum a posteriori state estimation from a general multimodal posterior as a constrained nonlinear optimization problem, together with an error estimate indicating actionable confidence in the state. A DBN-based framework, iMerge, is proposed that models the overlap of signal energies from closely spaced targets to add robustness to data association. iConsensus, a lightweight approach to network management and distributed tracking, and iMultitile, a method to trade off the cost of managing and propagating particles against desired accuracy limits, are also proposed. iSLAT, a distributed, lightweight smoothing algorithm for simultaneous localization and multitarget tracking, is discussed. iSLAT uses the well-known RANSAC algorithm to approximate the joint posterior densities.
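The thesis formulates MAP estimation from a multimodal posterior as constrained nonlinear optimization; without access to its specific algorithm, a generic multistart sketch conveys the basic idea: run local ascent from many starting points, deduplicate the modes found, and keep the best. The toy posterior and all constants are ours.

```python
import math

def logpost(x):
    """Toy multimodal log-posterior: mixture of N(-3, 1) and N(2, 0.5)."""
    n = lambda mu, sd: math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    return math.log(0.7 * n(-3.0, 1.0) + 0.3 * n(2.0, 0.5))

def grad(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative

def ascend(x, step=0.1, iters=500):
    """Plain gradient ascent to the local mode in whose basin x lies."""
    for _ in range(iters):
        x += step * grad(logpost, x)
    return x

# Multistart: ascend from a coarse grid, deduplicate the converged points,
# and take the highest mode as the MAP estimate.
modes = sorted({round(ascend(float(x)), 1) for x in range(-8, 9, 2)})
map_x = max(modes, key=logpost)
```

A single local search from the wrong basin would return the minor mode at 2; the multistart finds both and correctly reports the MAP at -3.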
3. Kwan, Teresa. "Role of the posterior parietal cortex in multimodal spatial behaviours." Thesis, 1994. http://hdl.handle.net/2429/5371.

Abstract:
The posterior parietal cortex (PPC) is a cortical region receiving inputs from different sensory modalities which has been shown to subserve a visuospatial function. The potential contribution of PPC in audiospatial behaviours and recognition of amodal spatial correspondences were postulated and assessed in the present study. Adult male Long- Evans rats received PPC lesions by aspiration, and they were compared to sham operated control rats on three behavioural tasks. In the Morris water maze, the rats had to learn to use the distal visual cues to locate an escape platform hidden in the pool. In an open field task, the rats were assessed on their reactions to a spatial relocation of a visual or an auditory object. In a spatial cross-modal transfer (CMT) task (Tees & Buhrmann, 1989), rats were trained to respond to light signals using spatial rules, and were then subjected to transfer tests using comparable sound signals. Results from the Morris water maze, the open field, and the initial training phase of the spatial CMT task confirmed a visuospatial deficit in PPC lesioned rats. However, if given sufficient training, PPC lesioned rats could learn the location of a hidden platform in the Morris water maze, and they could also acquire spatial rules in the CMT task. Such results indicated that the visuospatial deficits in PPC lesioned rats were less severe than previously thought. On the other hand, a persistent navigational difficulty characterized by a looping pattern of movement was observed in the PPC lesioned rats in the Morris water maze. Results from the open field indicated that PPC was less involved in audiospatial behaviours. Moreover, results also indicated that PPC was not necessary for spatial CMT. Hence, data from the present study did not support the idea that PPC played an essential role in supramodal spatial abilities in the rats. 
Instead, data from the spatial CMT task seemed to imply a role of PPC in managing conflicting spatial information coming from different sensory modalities.
4. Silva, Rochelle Ann Costa. "A multimodal approach to distinguish MCI-C from MCI-NC subjects." Master's thesis, 2016. http://hdl.handle.net/10451/25807.

Abstract:
Integrated master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2016.
A Doença de Alzheimer (AD, do inglês Alzheimer's Disease) é uma doença neurodegenerativa com crescente prevalência que afecta pessoas com idade mais avançada, habitualmente superior a 65 anos, e constitui entre 60-80% de todos os casos de demência. Provoca uma progressiva degradação dos neurónios e disfunção das sinapses, que constituem a região de ligação entre neurónios. Acredita-se que estas alterações sejam consequentes da acumulação de placas da proteína beta-amilóide no meio extracelular e de alterações anormais na proteína tau no meio intracelular. Consequentemente, com a progressão da doença, o doente começa a manifestar perda de memoria, dificuldade em formular pensamentos e alterações do comportamento, chegando a um estado em que se repercute nas atividades da vida diária. Atualmente, não existe cura para a AD, apenas alguns tratamentos que podem ser feitos para tentar retardar os sintomas e o declínio cognitivo. Estes conseguem ser mais eficazes nas primeiras fases da doença evitando assim piores condições de vida para os doentes. Como geralmente o diagnóstico da AD é tardio, a eficácia dos tratamentos disponíveis torna-se ainda mais limitada. Neste contexto, a doença de Alzheimer é vista como um problema de saúde pública com elevado impacto económico, tendo sido identificada como uma prioridade na investigação atual. Muitos estudos têm como principal objetivo a deteção precoce da AD, para que os tratamentos possam ser usados com a devida antecedência, sendo mais benéficos para o doente. Neste sentido, existe interesse no estudo do défice cognitivo ligeiro (MCI, do inglês: Mild Cognitive Impairment), visto que é considerado como um estado prodrómico da doença de Alzheimer, ou seja, doentes com MCI apresentam sintomas que podem indicar o início de AD antes que os sintomas mais específicos da doença surjam. No entanto, nem todos os casos de MCI desenvolvem AD, alguns permanecem estáveis ou podem reverter o declínio cognitivo. 
Deste modo, tem especial importância conseguir distinguir sujeitos com MCI que poderão converter (MCI-C), num determinado espaço de tempo, dos que não irão desenvolver a doença, ou seja, os MCI não conversores (MCI-NC). Diversos métodos de aprendizagem automática que aplicam algoritmos de inteligência artificial têm sido utilizados para reconhecer padrões nos dados obtidos através de técnicas ou exames médicos. Pretende-se encontrar padrões nos dados relacionados com a doença e alcançar um diagnóstico precoce confiável, através de classificações com elevada precisão obtidas por estes algoritmos. A combinação dos dados médicos com a inteligência artificial deu origem a uma tecnologia interdisciplinar, a que se dá o nome de diagnóstico auxiliado por computador (CAD, do inglês: Computer-Aided Diagnosis). Nos exercícios de CAD, em particular quando se usam técnicas de neuroimagem, para a criação um modelo de classificação são definidas normalmente cinco etapas: o pré-processamento das imagens, a extração de características, a seleção de características, a classificação e a finalmente avaliação do desempenho do classificador. O pré-processamento pode envolver várias fases, sendo essencialmente usado para eliminar a presença de ruído e heterogeneidades e fazer o alinhamento das imagens. Tanto a extração como a seleção das características permitem reduzir o problema da elevada dimensionalidade existente nas neuroimagens, que advém do excessivo número de voxels/características presentes em cada imagem. Os exames médicos disponíveis para facilitar o diagnóstico da AD são diversos e incluem exames de neuroimagem, análises laboratoriais, testes genéticos e neurofisiológicos. 
Neste trabalho, foram usadas duas modalidades de imagem que em estudos anteriores provaram ser vantajosas para o diagnóstico da AD: a Tomografia por Emissão de Positrões 8FFluorodesoxiglucose (FDG-PET, do inglês: fluorodeoxyglucose Positron Emission Tomography) que permite detetar hipometabolismo nas regiões afetadas pela doença, e as imagens estruturais de Ressonância Magnética (sMRI, do ingl^es structural Magnetic Resonance Imaging) que permitem detetar perda de volume do tecido cerebral. Ao juntar a informação destas duas modalidades, é possível fornecer ao classificador diferentes tipos de informação, funcional e estrutural, podendo alcançar previsões mais precisas. Por conseguinte, estas técnicas foram testadas individualmente, mas também numa abordagem multimodal. Para evitar o elevado número de voxels/características presentes nas imagens, determinados estudos usam apenas certas regiões do cérebro. No entanto, foi preferida a abordagem em que todos os voxels/características do cérebro são usados para não limitar o estudo apenas a determinadas zonas. Para selecionar as regiões mais relevantes de todo o cérebro e diminuir o problema da dimensionalidade foram usados dois métodos de seleção de características: o LASSO, para o caso em que se usou cada modalidade individualmente, e o group LASSO multi-task, no caso multimodal. O classificador mais utilizado para estudos de AD é a máquina de vetores de suporte (SVM, do inglês: Support Vector Machine). Este classificador é apelativo por se adequar a problemas de elevada dimensionalidade e apresentar bons resultados. No entanto, SVM é um classificador não-probabilístico, ou seja, devolve apenas a classe que prevê para um determinado teste e não uma probabilidade associada. Numa perspectiva clínica, seria mais vantajoso ter uma medida de confiança quanto à previsão feita pelo classificador. 
Recentemente, foram introduzidos dois classificadores que devolvem probabilidades à posteriori: o Processo Gaussiano (GP, do inglês Gaussian Process) e a Regressão logística (LR, do inglês Logistic Regression). Porem, ainda não foram muito explorados em estudos de AD, especialmente em relação às suas probabilidades à posteriori. Neste âmbito, com a presente tese testaram-se três classificadores (SVM, GP e LR), numa perspectiva multimodal, que junta dados FDG-PET e sMRI da base de dados Alzheimer's Disease Neuroimaging Initiative (ADNI), bem como numa abordagem usando as modalidades individualmente. Estes classfi_cadores foram utilizados em quatro testes de classificação diferentes, nomeadamente, para distinguir: AD de sujeitos com idades avançadas e cognição normal (CN); AD de MCI; CN de MCI e com maior interesse os MCI-C de MCI-NC, num período de tempo de conversão 24 meses. A partir dos resultados obtidos foi possível verificar que tanto o GP como o LR apresentaram resultados de classificação melhores que o SVM, para os casos AD vs CN, AD vs MCI e CN vs MCI. No entanto, na classificação verdadeiramente pertinente em termos científicos, ou seja, quando se testou MCI-C vs MCI-NC, o SVM revelou melhores resultados, sendo que o LR não ficou muito abaixo do SVM, já o GP teve uma performance inferior. É importante salientar que o GP apresentou vantagens em relação às probabilidades à posteriori exibidas pelo LR, visto que demonstrou mais confiança nas previsões feitas, enquanto o LR apresentou probabilidades à posteriori mais próximas do limiar entre a escolha de pertencer a uma classe ou outra. Com esta diferença foi possível demostrar a relevância de ter em consideração a análise das probabilidades à posteriori, em vez de se limitar à analise da precisão do classificador. 
Em relação ao número de características usadas, o LR necessitou um maior número em comparação ao GP ou SVM, apesar disso, não revelou ter um custo computacional superior aos outros dois classificadores. Quanto aos métodos de seleção de características, LASSO e group LASSO multi-task, destacase que ambos foram eficientes em diminuir o número de características e selecionaram regiões pertinentes, como o hipocampo, amigdala, tálamo, putamen e ventrículo lateral, que estão de acordo com as regiões detectadas em estudos anteriores. Em alguns casos, a abordagem multimodal não revelou ser superior aos resultados obtidos usando as modalidades individualmente. Não obstante, para a distinção entre MCI-C vs MCINC, independentemente do classificador usado, os resultados foram melhores aos obtidos quando se usou as modalidades individualmente. Assim demonstra-se que uma abordagem multimodal apresenta vantagens para diferenciar estes dois grupos de sujeitos.
Alzheimer's Disease (AD) is one of the most common neurodegenerative diseases, affecting 60-80% from all dementia cases. Unfortunately, the cure for AD is still not known and only some treatments can be done in its early stages to slow up the symptoms and cognitive decline, avoiding worst patients' living conditions. As most of the AD diagnoses are late, it increases the difficulty of applying the strategies and treatments available. Therefore, current studies aim at detecting AD at an early stage. For this purpose, they are studying mild cognitive impairment (MCI) subjects, as this is normally the first condition before developing AD. Nonetheless, not all MCI patients convert to AD, some remain stable or even may reverse the cognitive decline. In this sense, being able to distinguish between MCI-converters (MCI-C) and MCI-non converters (MCI-NC) reveals a quite important task. In order to distinguish between these and other groups of subjects many classifiers can be used. Classifiers are machine learning algorithms which apply artificial intelligence. These are extremely useful to identify patterns in, for example, medical brain images, to find disease related patterns and try to achieve an early and reliable diagnosis. The Support Vector Machine (SVM) is a widely used classifier for AD studies and is very appealing as it deals well with high-dimensional problems, which is present when using neuroimages because of the high number of voxels in each image. Nonetheless, SVM is a non-probabilistic classifier and only provides the class predicted for a given test. In a clinical perspective, it would be advantageous to also have a confidence level about the prediction made, to avoid diagnosis being hampered by overconfidence. Hence, of late the interest in probabilistic classifiers is rising. 
The Logistic Regression (LR) and the Gaussian Process (GP) are examples of probabilistic classifiers, but few studies used these methods to present results for AD classification, additionally the analysis of the posterior probability given by these classifiers is also still not well explored. In this context, this thesis proposes the comparison of the performance of probabilistic (LR and GP) and non-probabilistic (SVM) classifiers for AD context with special interest in reaching good results for MCI-C vs MCI-NC. These tests were done using two neuroimaging modalities: the deoxyglucose Positron Emission Tomography (FDG-PET) and structural Magnetic Resonance Imaging (sMRI), in single modal and multimodal approach. A whole-brain approach was chosen, to avoid restringing the model just for certain brain regions. For feature selection methods, the LASSO and group LASSO with L1=L2 regularization, for both single and multimodality cases, were used respectively. Four different binary classification tests involving AD, MCI and elderly cognitive normal (CN) subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, were performed: AD vs CN, AD vs MCI, CN vs MCI and MCI-C vs MCI-NC with a conversion period of 24 months. The results demonstrated the advantage of using GP and LR as they can achieve state-of-the art classification results and be better than SVM, in most cases, while providing posterior probabilities that will help evaluate how confident the classifier is on its predictions. However, to distinguish MCI-C and MCI-NC, SVM seemed to get better results, with LR being just a little worse than SVM. The posterior probabilities from GP attracted more attention, because they demonstrated higher confidence in results, whereas LR posterior probabilities were mostly near the threshold value, meaning that the class is not chosen with a lot of confidence. 
Although the multimodal approach did not always show the best results, for the MCI-C vs MCI-NC classification it outperformed the single-modality results, independently of the classifier used. This shows that it is useful to combine information from different modalities to help distinguish between MCI-C and MCI-NC.
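The contrast the abstract draws between a non-probabilistic SVM (hard labels) and a probabilistic classifier (posterior probabilities) can be sketched with scikit-learn; this is an illustration only, on synthetic stand-in data, and the feature dimensions, regularization strength, and settings are hypothetical, not the thesis setup:

```python
# Contrast a hard-label SVM with a probabilistic logistic regression.
# Synthetic stand-in for voxel features; all settings are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 20)),   # "class 0" subjects
               rng.normal(0.7, 1.0, (50, 20))])  # "class 1" subjects
y = np.repeat([0, 1], 50)

svm = SVC(kernel="linear").fit(X, y)
# an L1 (LASSO-style) penalty mimics sparse whole-brain feature selection
lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)

print(svm.predict(X[:1]))        # hard label only, no confidence attached
print(lr.predict_proba(X[:1]))   # posterior probability for each class
```

A probability near 0.5 from `predict_proba` flags exactly the low-confidence predictions the abstract discusses for LR.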
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Multimodal Posteriors"

1

Cheng, Russell. Finite Mixture Models. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0017.

Full text
Abstract:
Fitting a finite mixture model when the number of components, k, is unknown can be carried out using the maximum likelihood (ML) method though it is non-standard. Two well-known Bayesian Markov chain Monte Carlo (MCMC) methods are reviewed and compared with ML: the reversible jump method and one using an approximating Dirichlet process. Another Bayesian method, to be called MAPIS, is examined that first obtains point estimates for the component parameters by the maximum a posteriori method for different k and then estimates posterior distributions, including that for k, using importance sampling. MAPIS is compared with ML and the MCMC methods. The MCMC methods produce multimodal posterior parameter distributions in overfitted models. This results in the posterior distribution of k being biased towards high k. It is shown that MAPIS does not suffer from this problem. A simple numerical example is discussed.
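The multimodality that the chapter attributes to mixture posteriors can be seen directly from label switching; this minimal sketch (illustrative, not taken from the chapter) evaluates the likelihood of a two-component Gaussian mixture at the two permutations of its component means:

```python
# Label switching: permuting the component means leaves the likelihood
# unchanged, so the posterior over (mu1, mu2) has symmetric modes.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])

def loglik(m1, m2):
    # equal weights and unit variances, for simplicity
    p = 0.5 * np.exp(-0.5 * (x - m1)**2) + 0.5 * np.exp(-0.5 * (x - m2)**2)
    return np.sum(np.log(p / np.sqrt(2 * np.pi)))

# identical scores under the two labelings -> two symmetric posterior modes
print(loglik(-2.0, 2.0), loglik(2.0, -2.0))
```

With k unknown, overfitted models add further modes on top of these permutation symmetries, which is the situation the abstract describes for the MCMC methods.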

Book chapters on the topic "Multimodal Posteriors"

1

Afridi, Rubbia, Aniruddha Agarwal, Mohammad Ali Sadiq, Muhammad Hassan, Diana V. Do, Quan Dong Nguyen, and Yasir Jamal Sepah. "Fundus Autofluorescence Imaging in Posterior Uveitis." In Multimodal Imaging in Uveitis, 69–85. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-23690-2_5.

2

Moriyama, Muka. "Multimodal Imaging of Posterior Staphyloma." In Atlas of Pathologic Myopia, 41–45. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4261-9_7.

3

Ketabdar, Hamed, Hervé Bourlard, and Samy Bengio. "Hierarchical Multi-stream Posterior Based Speech Recognition System." In Machine Learning for Multimodal Interaction, 294–306. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11677482_26.

4

Karafiát, Martin, Frantiśek Grézl, Petr Schwarz, Lukáš Burget, and Jan Černocký. "Robust Heteroscedastic Linear Discriminant Analysis and LCRC Posterior Features in Meeting Data Recognition." In Machine Learning for Multimodal Interaction, 275–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11965152_25.

5

Herbst, Edward P., and Frank Schorfheide. "Sequential Monte Carlo Methods." In Bayesian Estimation of DSGE Models. Princeton University Press, 2015. http://dx.doi.org/10.23943/princeton/9780691161082.003.0005.

Abstract:
This chapter analyzes Sequential Monte Carlo (SMC) algorithms and how they were initially developed to solve filtering problems that arise in nonlinear state-space models. The first paper that applied SMC techniques to posterior inference in DSGE models is Creal (2007). Herbst and Schorfheide (2014) developed the algorithm further, provided some convergence results for an adaptive version of the algorithm, and showed that a properly tailored SMC algorithm delivers more reliable posterior inference for large-scale DSGE models with multimodal posteriors than the widely used random-walk Metropolis-Hastings (RWMH) algorithm. An additional advantage of SMC algorithms over MCMC algorithms, on the computational front, highlighted by Durham and Geweke (2014), is that SMC is much more amenable to parallelization.
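The likelihood-tempering idea behind such SMC samplers can be sketched in a few lines; this toy version (an illustration under simplified assumptions, not the book's algorithm) bridges from a wide prior to a bimodal posterior with a fixed tempering schedule, resampling, and random-walk mutation steps:

```python
# Toy SMC sampler with likelihood tempering on a bimodal 1-D target.
# The schedule, particle count, and step sizes are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def log_like(x):
    # bimodal "likelihood": mixture of N(-3,1) and N(3,1)
    return np.logaddexp(-0.5 * (x + 3)**2, -0.5 * (x - 3)**2)

def log_prior(x):
    return -0.5 * (x / 10)**2            # N(0, 10^2), unnormalized

n = 2000
x = rng.normal(0, 10, n)                 # stage 0: draws from the prior
phis = np.linspace(0, 1, 21)             # tempering schedule

for phi_prev, phi in zip(phis[:-1], phis[1:]):
    w = np.exp((phi - phi_prev) * log_like(x))    # incremental weights
    w /= w.sum()
    x = x[rng.choice(n, size=n, p=w)]             # resample
    for _ in range(3):                            # mutate: random-walk MH
        prop = x + rng.normal(0, 1.0, n)
        log_acc = (phi * log_like(prop) + log_prior(prop)
                   - phi * log_like(x) - log_prior(x))
        x = np.where(np.log(rng.uniform(size=n)) < log_acc, prop, x)

print((x < 0).mean(), (x > 0).mean())    # both modes remain populated
```

Because particles are reweighted gradually, both modes survive to the final stage, which a single random-walk chain started in one mode would likely miss.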
6

de los Santos, Cristian, Lidia Cocho, and José María Herreras. "Multimodal Imaging of White Dot Syndromes." In Eye Diseases - Recent Advances, New Perspectives and Therapeutic Options [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.106467.

Abstract:
White dot syndromes are an uncommon group of posterior uveitis affecting the outer retina, retinal pigment epithelium, choriocapillaris, and/or choroidal stroma. Multimodal imaging, including fundus fluorescein angiography, indocyanine green angiography, autofluorescence, and optical coherence tomography angiography, has improved our understanding regarding their pathophysiology, helping us to rename or even regroup some of these disorders as one disease in opposition to the historical description. It also provides useful information to evaluate disease activity and monitor response to treatment. This chapter will review the different findings on multimodal imaging of these heterogenous disorders and classify them according to their primary anatomic involvement.
7

Schetinin, V., and L. Jakaite. "Assessment and Confidence Estimates of Newborn Brain Maturity from Sleep EEG." In E-Health Technologies and Improving Patient Safety: Exploring Organizational Factors, 215–26. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2657-7.ch014.

Abstract:
Electroencephalograms (EEGs) recorded from sleeping newborns contain information about their brain maturity. Although these EEGs are very weak, distorted by artifacts, and vary widely during sleep hours as well as between patients, the main maturity-related patterns are recognizable by experts. However, experts are typically incapable of quantitatively providing accurate estimates of confidence in assessments. The most accurate estimates are, in theory, provided by the Bayesian methodology of probabilistic inference, which has been practically implemented with Markov Chain Monte Carlo (MCMC) integration over a model parameter space. Typically this technique aims to approximate the integral by sampling areas of interest with a high likelihood of the true model. In practice, the likelihood distributions are typically multimodal, and for this reason the existing MCMC techniques have been shown incapable of providing proportional sampling of multiple areas of interest. Besides, the lack of prior information compounds this problem, especially for a large model parameter space, making its detailed exploration impossible within a reasonable time. Specifically, the absence of information about EEG features has been shown to affect the results of the Bayesian assessment of EEG maturity. In this chapter, the authors discuss how the posterior information can be used to mitigate the problem of disproportional sampling and improve the accuracy of assessments. Having analyzed the posterior information, they found that the MCMC integration tends to oversample areas in which the model parameter space includes EEG features that make a weak contribution to the assessment. This observation motivated the authors to correct the results of the MCMC integration, and when they tested the proposed method on the EEG recordings, they found an increase in the accuracy of assessment.
8

Bar-Sela, Shai M. "Bleb-Related Vision Loss." In Complications of Glaucoma Surgery. Oxford University Press, 2013. http://dx.doi.org/10.1093/oso/9780195382365.003.0033.

Abstract:
Trabeculectomy is an effective procedure to control intraocular pressure (IOP) and to prevent progression of vision loss. One of the risks associated with this procedure is oversized and exuberant blebs, which may result in reduction of visual acuity. Understanding the mechanisms and prognosis of this complication is important for evaluating and selecting the proper treatment. Large blebs overhanging the cornea may cause visual acuity loss if they directly obstruct the visual axis, but they can also be problematic due to their effect on lid movements and resultant drying of the cornea. Furthermore, the overhanging, "beer-belly" bleb can also induce corneal dryness as well as irregular astigmatism. The trabeculectomy surgical technique itself may also affect the development of oversized blebs. Some authors believe that fornix-based conjunctival flaps result in more diffuse and less elevated blebs that are less likely to encroach on the limbus, compared to limbus-based conjunctival flaps. Limbus-based conjunctival flaps are limited by scar formation at the conjunctival wound site, preventing posterior movement of aqueous and forcing bleb elevation toward the limbus. The use of antifibrotics, such as mitomycin-C and 5-fluorouracil, during filtering procedures may predispose to the development of larger ischemic blebs. Thin-walled ischemic blebs may continue to enlarge months to years postoperatively as the bleb wall constantly remodels. Various laser treatments can be used to contract oversized blebs. Fink et al. used argon laser photocoagulation to shrink large blebs in 4 eyes; however, 2 eyes developed leaks. Sony et al. treated 3 eyes with large blebs using frequency-doubled Nd:YAG photocoagulation after painting the area of the blebs with gentian violet to enhance the laser absorption. Several treatment sessions resulted in bleb shrinkage and remodeling. Lynch et al. applied a continuous-wave multimode Nd:YAG laser in 4 eyes with symptomatic large blebs, 3 of which had undergone previous trabeculectomy with antifibrotic agents. Two eyes required retreatment, and one eye developed a bleb leak afterward. These reports indicate that laser application success has been limited, and bleb leaks may occur.

Conference papers on the topic "Multimodal Posteriors"

1

Lu, Ting, Monica F. Bugallo, and Petar M. Djuric. "Simplified Marginalized Particle Filtering for Tracking Multimodal Posteriors." In 2007 IEEE/SP 14th Workshop on Statistical Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/ssp.2007.4301261.

2

Madhyastha, Pranava Swaroop, Josiah Wang, and Lucia Specia. "Sheffield MultiMT: Using Object Posterior Predictions for Multimodal Machine Translation." In Proceedings of the Second Conference on Machine Translation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-4752.

3

Das, Samarjit, and Namrata Vaswani. "Efficient Importance Sampling Techniques for Large Dimensional and Multimodal Posterior Computations." In 2009 IEEE 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop. IEEE, 2009. http://dx.doi.org/10.1109/dsp.2009.4785934.

4

Shen, Bingxin, Monica F. Bugallo, and Petar M. Djuric. "Estimation of multimodal posterior distributions of chirp parameters with population Monte Carlo sampling." In ICASSP 2012 - 2012 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2012. http://dx.doi.org/10.1109/icassp.2012.6288760.

5

Lee, Bowon, and Kar-Han Tan. "Maximum a posteriori Multimodal 3D Object Localization With a Depth Sensor and Stereo Microphones." In 2nd International ICST Conference on Immersive Telecommunications. ICST, 2009. http://dx.doi.org/10.4108/icst.immerscom2009.6235.

6

Das, Abhijit, Umapada Pal, Miguel A. Ferrer, and Michael Blumenstein. "A decision-level fusion strategy for multimodal ocular biometric in visible spectrum based on posterior probability." In 2017 IEEE International Joint Conference on Biometrics (IJCB). IEEE, 2017. http://dx.doi.org/10.1109/btas.2017.8272772.

7

Zhang, Shumao, Fahim Forouzanfar, and Xiao-Hui Wu. "Stein Variational Gradient Descent for Reservoir History Matching Problems." In SPE Reservoir Simulation Conference. SPE, 2023. http://dx.doi.org/10.2118/212190-ms.

Abstract:
The reservoir history matching problem estimates the system (i.e., reservoir model) parameters based on noisy observed data. Examples include estimating the permeability and porosity fields from time series of oil, water, and gas production rates. The estimation of parameters is formulated as estimating their probability distributions; it is a required step for reservoir management operation and planning under subsurface uncertainty. The Bayesian framework is commonly used to estimate the posterior distribution of parameters, which may contain multiple modes that correspond to distinct reservoir scenarios. Here, we study the application of the Stein Variational Gradient Descent (SVGD) method, originally proposed by Liu & Wang (2016), to reservoir history matching problems. The rationale and mechanics of the SVGD method are discussed, and the adaptation of this method to the reservoir characterization application is presented. More specifically, we propose to formulate the gradient-based SVGD method using stochastic gradients for reservoir history matching applications. To the best of our knowledge, this paper presents the first application of the SVGD method to a reservoir characterization problem. The utilization of stochastic approximation of gradients within gradient-based SVGD is another novel aspect of this work. The formulated algorithm is benchmarked using synthetic test problems with known multimodal posterior distributions. The application of the proposed algorithm is also investigated on synthetic and real history matching problems, including the IC Fault model and an unconventional well simulation model. The reservoir test problems are further investigated to evaluate the method's performance in comparison with implementations of a Gauss-Newton optimization and an iterative Ensemble Smoother method for sampling the posterior distribution.
We show that the proposed implementation of SVGD can capture the posterior distribution and its complicated geometry. For the reservoir IC Fault test problem, the method effectively samples multiple modes. For the unconventional test problem, the samples are compared with those obtained using the Gauss-Newton and iterative Ensemble Smoother methods.
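The mechanics of SVGD can be sketched on a toy bimodal target; this 1-D version with an RBF kernel and median-heuristic bandwidth is an illustration only, not the paper's reservoir implementation (which uses stochastic gradients of a simulator-based likelihood):

```python
# Minimal 1-D SVGD: kernelized gradient steps transport particles toward
# a bimodal target while a repulsion term keeps them from collapsing.
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x):
    # target: 0.5*N(-3,1) + 0.5*N(3,1); gradient of its log density
    a = np.exp(-0.5 * (x + 3)**2)
    b = np.exp(-0.5 * (x - 3)**2)
    return -(a * (x + 3) + b * (x - 3)) / (a + b)

x = rng.normal(0, 1, 200)                # particles, initialized near zero
for _ in range(500):
    d2 = (x[:, None] - x[None, :])**2
    h = np.median(d2) / np.log(len(x) + 1) + 1e-8   # median heuristic
    k = np.exp(-d2 / h)                              # RBF kernel matrix
    # phi_i = mean_j [ k(x_j,x_i) grad log p(x_j) + grad_{x_j} k(x_j,x_i) ]
    repulse = (2 * (x[:, None] - x[None, :]) / h * k).sum(axis=1)
    x = x + 0.1 * (k @ grad_log_p(x) + repulse) / len(x)

print((x < 0).mean(), (x > 0).mean())    # particles cover both modes
```

The repulsion term is what lets the particle set straddle both modes, which is the behavior the abstract reports for the IC Fault problem.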

Reports on the topic "Multimodal Posteriors"

1

Bisang, Roberto, Jeremías Lachman, Andrés López, Martin Pereyra, and Ezequiel Tacsir. Reinserción internacional y apertura de nuevos mercados de la cadena bovina en Argentina y Uruguay: nuevas formas de institucionalidad y esquemas de cooperación público-privados. Inter-American Development Bank, March 2022. http://dx.doi.org/10.18235/0004102.

Abstract:
Beef played an iconic role in Argentine and Uruguayan foreign trade until the 1970s, when various restrictions in external markets severely affected trade flows. In later decades, both countries established various productive, technological, and commercial policies to improve their competitiveness and re-enter the now renewed international markets. Over recent decades, other meat-exporting countries (the United States, Brazil, Paraguay, Australia) consolidated their positions, while buyer profiles (community markets for kosher and halal meats, hotel/restaurant/catering chains) and commercial modalities (multimodal transport, e-commerce) were reconfigured. The recent emergence of China, with enormous purchase volumes and clear buyer leadership, consolidates a new structure of the world beef market characterized centrally by growing demand. Starting from different productive structures, Argentina and Uruguay face these challenges with the common objective of capturing part of the dynamism of the renewed international meat market. Over decades they developed a network of institutions and policies (promotion and regulatory institutes, specific programs, coordination forums) aimed at fostering export activity without neglecting their respective domestic markets.
The study analyzes the structure and functioning of these initiatives and the business responses materialized through various export strategies: from access to mass markets based on quasi-vertically integrated meatpacking industries, to the formation of production networks among ranchers, slaughter service providers, and distributors, conveniently coordinated by "anchor" firms and oriented toward market niches and differentiated high-value-added products. A particular mention is added regarding the dynamics of local subsidiaries of international meat industry conglomerates.
