Theses / dissertations on the topic "A priori data"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the 50 best theses / dissertations for your research on the topic "A priori data".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read its abstract online, when it is available in the metadata.
Browse theses / dissertations from a wide variety of academic disciplines and compile a correct bibliography.
Salas, Percy Enrique Rivera. "StdTrip: An A Priori Design Process for Publishing Linked Data". Pontifícia Universidade Católica do Rio de Janeiro, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28907@1.
Open Data is a new approach to promote interoperability of data in the Web. It consists in the publication of information produced, archived and distributed by organizations in formats that allow it to be shared, discovered, accessed and easily manipulated by third party consumers. This approach requires the triplification of datasets, i.e., the conversion of database schemata and their instances to a set of RDF triples. A key issue in this process is deciding how to represent database schema concepts in terms of RDF classes and properties. This is done by mapping database concepts to an RDF vocabulary, used as the base for generating the triples. The construction of this vocabulary is extremely important, because the more standards are reused, the easier it will be to interlink the result to other existing datasets. However, tools available today do not support reuse of standard vocabularies in the triplification process, but rather create new vocabularies. In this thesis, we present the StdTrip process that guides users in the triplification process, while promoting the reuse of standard, RDF vocabularies.
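The core of the triplification step described above can be pictured in a few lines. The sketch below is illustrative only: the table layout, base URI, and choice of the FOAF vocabulary are assumptions, not taken from the thesis. It maps one row of a hypothetical relational person table to RDF triples with rdflib, reusing a standard vocabulary instead of minting a new one, which is exactly the reuse StdTrip promotes.

```python
# Minimal sketch of triplification: one relational row -> RDF triples,
# reusing the standard FOAF vocabulary. Table layout and base URI are
# hypothetical, not taken from the thesis.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

BASE = Namespace("http://example.org/resource/")

def triplify_person(row: dict) -> Graph:
    """Convert one relational 'person' row into RDF triples."""
    g = Graph()
    g.bind("foaf", FOAF)
    subject = URIRef(BASE + f"person/{row['id']}")     # primary key -> IRI
    g.add((subject, RDF.type, FOAF.Person))            # table -> RDF class
    g.add((subject, FOAF.name, Literal(row["name"])))  # column -> RDF property
    g.add((subject, FOAF.mbox, URIRef(f"mailto:{row['email']}")))
    return g

g = triplify_person({"id": 42, "name": "Ada Lovelace", "email": "ada@example.org"})
print(g.serialize(format="turtle"))
```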
Egidi, Leonardo. "Developments in Bayesian Hierarchical Models and Prior Specification with Application to Analysis of Soccer Data". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427270.
In recent years, the challenge of specifying new prior distributions and using complex hierarchical models has become even more relevant within Bayesian inference. The advent of Markov chain Monte Carlo techniques, together with new probabilistic programming languages, has extended the boundaries of the field in both theoretical and applied directions. In this thesis we pursue both theoretical and applied goals. In the first part, we propose a new class of data-dependent prior distributions specified as a mixture of a noninformative prior and an informative prior. A generic distribution belonging to this new class provides less information than an informative prior and is designed not to dominate the inferential conclusions when the sample size is small or moderate. Such a distribution is suitable for robustness purposes, especially in case of misspecification of the informative prior. Simulation studies within conjugate models show that this proposal can be convenient for reducing mean squared errors and improving frequentist coverage. Moreover, under mild conditions, this class of distributions yields some other interesting theoretical properties. In the second part of the thesis, we use Bayesian hierarchical models to predict quantities related to the game of soccer, and we extend the usual modelling of goals by including in the model additional information coming from the bookmakers. Posterior predictive checks reveal excellent adherence of the model to the data at hand and good calibration, and finally suggest the construction of efficient betting strategies for future data.
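The mixture-prior idea in this abstract can be illustrated with a standard conjugate computation. The sketch below is not the thesis code: it assumes a normal model with known variance and a two-component mixture prior (informative plus deliberately vague), and uses the textbook result that the posterior is again a mixture whose weights are updated by each component's marginal likelihood, so prior-data conflict shifts weight to the vague component.

```python
# Illustrative sketch (not the thesis code): posterior for a normal mean
# under a mixture prior combining an informative N(mu0, tau0^2) component
# with a vague N(mu0, (k*tau0)^2) component. With conjugate components the
# posterior is a mixture whose weights are updated by marginal likelihoods.
import numpy as np
from scipy import stats

def mixture_posterior(y, sigma, mu0, tau0, k=10.0, eps=0.5):
    """Return per-component posteriors (mean, sd) and updated weights."""
    n, ybar = len(y), np.mean(y)
    post, weights = [], []
    for w, tau in [(1 - eps, tau0), (eps, k * tau0)]:  # informative, vague
        prec = 1 / tau**2 + n / sigma**2               # posterior precision
        mean = (mu0 / tau**2 + n * ybar / sigma**2) / prec
        # marginal likelihood of ybar under this component: N(mu0, tau^2 + sigma^2/n)
        ml = stats.norm.pdf(ybar, mu0, np.sqrt(tau**2 + sigma**2 / n))
        post.append((mean, np.sqrt(1 / prec)))
        weights.append(w * ml)
    weights = np.array(weights) / sum(weights)
    return post, weights

# Prior-data conflict: data far from mu0 shifts weight to the vague component.
post, w = mixture_posterior(y=np.random.normal(5.0, 1.0, 20), sigma=1.0,
                            mu0=0.0, tau0=1.0)
print(w)  # weight on the informative component should be near zero
```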
Tan, Rong Kun Jason. "Scalable Data-agnostic Processing Model with a Priori Scheduling for the Cloud". Thesis, Curtin University, 2019. http://hdl.handle.net/20.500.11937/75449.
Bussy, Victor. "Integration of a priori data to optimise industrial X-ray tomographic reconstruction". Electronic Thesis or Diss., Lyon, INSA, 2024. http://www.theses.fr/2024ISAL0116.
This thesis explores research topics in the field of industrial non-destructive testing (NDT) using X-rays. The application of computed tomography (CT) has significantly expanded, and its use has intensified across many industrial sectors. Due to increasing demands and constraints on inspection processes, CT must continually evolve and adapt. Whether in terms of reconstruction quality or inspection time, X-ray tomography is constantly progressing, particularly in the so-called sparse-view strategy. This strategy involves reconstructing an object from the smallest possible number of radiographic projections while maintaining satisfactory reconstruction quality, which reduces acquisition times and the associated costs. Sparse-view reconstruction poses a significant challenge, as the tomographic problem is ill-conditioned or, as it is often described, ill-posed. Numerous techniques have been developed to overcome this obstacle, many of which rely on leveraging prior information during the reconstruction process. By exploiting data and knowledge available before the experiment, it is possible to improve reconstruction results despite the reduced number of projections. In our industrial context, for example, the computer-aided design (CAD) model of the object is often available, which provides valuable information about the geometry of the object under study. However, the CAD model only offers an approximate representation of the object; in NDT or metrology, it is precisely the differences between an object and its CAD model that are of interest. Integrating prior information is therefore complex, as this information is often "approximate" and cannot be used as is. Instead, we propose to judiciously use the geometric information available from the CAD model at each step of the process. We do not propose a single method but rather a methodology for integrating prior geometric information during X-ray tomographic reconstruction.
Skrede, Ole-Johan. "Explicit, A Priori Constrained Model Parameterization for Inverse Problems, Applied on Geophysical CSEM Data". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-27343.
Texto completo da fonteKindlund, Andrée. "Inversion of SkyTEM Data Based on Geophysical Logging Results for Groundwater Exploration in Örebro, Sweden". Thesis, Luleå tekniska universitet, Geovetenskap och miljöteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85315.
Beretta, Valentina. "Évaluation de la véracité des données : améliorer la découverte de la vérité en utilisant des connaissances a priori". Thesis, IMT Mines Alès, 2018. http://www.theses.fr/2018EMAL0002/document.
The notion of data veracity is increasingly getting attention due to the problem of misinformation and fake news. With more and more information published online, it is becoming essential to develop models that automatically evaluate information veracity. Indeed, the task of evaluating data veracity is very difficult for humans: they are affected by confirmation bias, which prevents them from objectively evaluating the reliability of information, and the amount of information available nowadays makes the task time-consuming. The computational power of computers is required, and it is critical to develop methods that are able to automate this task. In this thesis we focus on truth discovery models. These approaches address the data veracity problem when conflicting values about the same properties of real-world entities are provided by multiple sources. They aim to identify the true claims among the set of conflicting ones. More precisely, they are unsupervised models based on the rationale that true information is provided by reliable sources and reliable sources provide true information. The main contribution of this thesis consists in improving truth discovery models by considering a priori knowledge expressed in ontologies; this knowledge may facilitate the identification of true claims. Two particular aspects of ontologies are considered. First of all, we explore the semantic dependencies that may exist among different values, i.e. the ordering of values through certain conceptual relationships. Indeed, two different values are not necessarily conflicting: they may represent the same concept, but with different levels of detail. In order to integrate this kind of knowledge into existing approaches, we use the mathematical models of partial order. Then, we consider recurrent patterns that can be derived from ontologies. This additional information reinforces the confidence in certain values when certain recurrent patterns are observed; in this case, we model recurrent patterns using rules. Experiments conducted both on synthetic and real-world datasets show that a priori knowledge enhances existing models and paves the way towards a more reliable information world. Source code as well as the synthetic and real-world datasets are freely available.
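A minimal fixed-point iteration conveys the rationale "reliable sources provide true values, and true values come from reliable sources". The sketch below is in the spirit of classical truth discovery models (TruthFinder-style averaging), not the thesis's ontology-aware algorithm; the claims and update rules are illustrative assumptions.

```python
# A minimal truth-discovery iteration: source trustworthiness and claim
# confidence are updated in turn until a fixed point. Illustrative only.
import numpy as np

def truth_discovery(claims, n_sources, n_iter=50):
    """claims: list of (source_id, object_id, value) triples."""
    trust = np.full(n_sources, 0.5)                   # initial trustworthiness
    objects = {}
    for s, o, v in claims:
        objects.setdefault(o, {}).setdefault(v, []).append(s)
    conf = {}
    for _ in range(n_iter):
        conf = {}                                     # confidence of each value
        for o, values in objects.items():
            scores = {v: sum(trust[s] for s in srcs) for v, srcs in values.items()}
            total = sum(scores.values())
            conf[o] = {v: sc / total for v, sc in scores.items()}
        # a source's trust = average confidence of the values it claims
        sums, counts = np.zeros(n_sources), np.zeros(n_sources)
        for s, o, v in claims:
            sums[s] += conf[o][v]
            counts[s] += 1
        trust = sums / np.maximum(counts, 1)
    return trust, conf

claims = [(0, "capital_FR", "Paris"), (1, "capital_FR", "Paris"),
          (2, "capital_FR", "Lyon")]
trust, conf = truth_discovery(claims, n_sources=3)
print(conf["capital_FR"])  # "Paris" should win
```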
Denaxas, Spiridon Christoforos. "A novel framework for integrating a priori domain knowledge into traditional data analysis in the context of bioinformatics". Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492124.
Katragadda, Mohit. "Development of flame surface density closure for turbulent premixed flames based on a priori analysis of direct numerical simulation data". Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2195.
Rouault-Pic, Sandrine. "Reconstruction en tomographie locale : introduction d'information à priori basse résolution". PhD thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00005016.
Texto completo da fonteLiczbinski, Celso Antonio. "Classificação de dados imagens em alta dimensionalidade, empregando amostras semi-rotuladas e estimadores para as probabilidades a priori". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/12014.
In natural scenes there are cases in which some of the land-cover classes involved are spectrally very similar, i.e., their first-order statistics are nearly identical. In these cases, the more traditional sensor systems such as Landsat-TM and SPOT, among others, usually result in a thematic image low in accuracy. On the other hand, it is well known that high-dimensional image data allow for the separation of classes that are spectrally very similar, provided that their second-order statistics differ significantly. The classification of high-dimensional image data, however, poses new problems, such as the estimation of the parameters in a parametric classifier. As the data dimensionality increases, so does the number of parameters to be estimated, particularly in the covariance matrix. In real cases, however, the number of training samples available is usually limited, preventing a reliable estimation of the parameters required by the classifier. The paucity of training samples results in a low accuracy for the thematic image, which becomes more noticeable as the data dimensionality increases. This condition is known as the Hughes phenomenon. Different approaches to mitigate the Hughes phenomenon have been investigated by many authors and reported in the literature. Among the proposed alternatives, so-called semi-labeled samples have shown promising results in the classification of high-dimensional remote sensing image data, such as AVIRIS data. In this dissertation the approach proposed by Lemos (2003) is further investigated to increase the reliability of the estimation of the parameters required by the Gaussian Maximum Likelihood (GML) classifier. We propose a methodology to estimate the a priori probabilities P(ωi) required by the GML classifier. It is expected that a more realistic estimation of the a priori probabilities will help to increase the accuracy of the thematic image produced by the GML classifier. The experiments performed in this study have shown an increase in the accuracy of the thematic image, suggesting the adequacy of the proposed methodology.
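The discriminant function of the GML classifier mentioned in the abstract, including the prior-probability term the thesis estimates, is standard. The following is a toy sketch (data and priors are made up) showing how the log P(ωi) term can flip a classification decision.

```python
# Sketch of the Gaussian Maximum Likelihood (GML) discriminant. The class
# priors P(w_i) enter as a log term; the thesis's point is that estimating
# them realistically (rather than assuming equal priors) can raise
# thematic-map accuracy. Toy data; not the thesis's estimator.
import numpy as np

def gml_discriminant(x, mean, cov, prior):
    """g_i(x) = ln P(w_i) - 0.5 ln|C_i| - 0.5 (x-m_i)^T C_i^{-1} (x-m_i)."""
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return np.log(prior) - 0.5 * logdet - 0.5 * diff @ np.linalg.solve(cov, diff)

def classify(x, means, covs, priors):
    scores = [gml_discriminant(x, m, c, p) for m, c, p in zip(means, covs, priors)]
    return int(np.argmax(scores))

means = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
covs = [np.eye(2), np.eye(2)]
print(classify(np.array([0.9, 0.8]), means, covs, priors=[0.5, 0.5]))    # -> 1
print(classify(np.array([0.9, 0.8]), means, covs, priors=[0.99, 0.01]))  # prior flips it -> 0
```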
Brard, Caroline. "Approche Bayésienne de la survie dans les essais cliniques pour les cancers rares". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS474/document.
The Bayesian approach augments the information provided by the trial itself by incorporating external information into the trial analysis. In addition, this approach allows the results to be expressed in terms of the probability of some treatment effect, which is more informative and interpretable than a p-value and a confidence interval. Moreover, the frequent reduction of an analysis to a binary interpretation of the results (significant versus non-significant) is particularly harmful in rare diseases. In this context, the objective of my work was to explore the feasibility, constraints and contribution of the Bayesian approach in clinical trials in rare cancers with a censored primary endpoint. A review of the literature confirmed that the implementation of Bayesian methods is still limited in the analysis of clinical trials with a censored endpoint. In the second part of our work, we developed a Bayesian design integrating historical data in the setting of a real clinical trial with a survival endpoint in a rare disease (osteosarcoma). The prior incorporated individual historical data on the control arm and aggregate historical data on the relative treatment effect. Through a large simulation study, we evaluated the operating characteristics of the proposed design and calibrated the model while exploring the issue of commensurability between historical and current data. Finally, the re-analysis of three clinical trials allowed us to illustrate the contribution of the Bayesian approach to the expression of results, and how this approach enriches the frequentist analysis of a trial.
Perra, Silvia. "Objective Bayesian variable selection for censored data". Doctoral thesis, Università degli Studi di Cagliari, 2013. http://hdl.handle.net/11584/266108.
Texto completo da fonteGay, Antonin. "Pronostic de défaillance basé sur les données pour la prise de décision en maintenance : Exploitation du principe d'augmentation de données avec intégration de connaissances à priori pour faire face aux problématiques du small data set". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0059.
This CIFRE PhD is a joint project between ArcelorMittal and the CRAN laboratory, with the aim of optimizing industrial maintenance decision-making through the exploitation of the available sources of information, i.e. industrial data and knowledge, under the industrial constraints presented by the steel-making context. The current maintenance strategy on steel lines is based on regular preventive maintenance; its evolution towards a dynamic strategy is achieved through predictive maintenance, which has been formalized within the Prognostics and Health Management (PHM) paradigm as a seven-step process. Among these PHM steps, this PhD's work focuses on decision-making and prognostics. The Industry 4.0 context puts emphasis on data-driven approaches, which require amounts of data that industrial systems cannot systematically supply. The first contribution of the PhD consists in proposing an equation linking prognostics performance to the number of available training samples. This contribution makes it possible to predict the prognostics performance that could be obtained with additional data when dealing with small datasets. The second contribution of the PhD focuses on evaluating and analyzing the performance of data augmentation when applied to prognostics on small datasets; data augmentation leads to an improvement of prognostics performance of up to 10%. The third contribution of the PhD consists in the integration of expert knowledge into data augmentation. Statistical knowledge integration proved efficient in avoiding the performance degradation caused by data augmentation under some unfavorable conditions. Finally, the fourth contribution consists in the integration of prognostics into maintenance decision-making cost modeling and the evaluation of the impact of prognostics on maintenance decision costs. It demonstrates that (i) the implementation of predictive maintenance reduces maintenance cost by up to 18-20% and (ii) the 10% prognostics improvement can reduce maintenance cost by an additional 1%.
Ducros, Florence. "Maintien en conditions opérationnelles pour une flotte de véhicules : étude de la non stabilité des flux de rechange dans le temps". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS213/document.
This thesis gathers methodological contributions for simulating the need for replacement equipment for a vehicle fleet. Systems degrade with age or use, and fail when they no longer fulfill their mission. The user needs assurance that the system will be operational during its useful life; a support contract obliges the manufacturer to remedy failures and to keep the system in operational condition for the duration of the contract. The management of support contracts, or the extension of support, requires knowledge of the equipment lifetimes and of the usage conditions of the vehicles, which depend on the customer. The analysis of customer returns (RetEx) is therefore an important tool to support the manufacturer's decisions. In reliability or warranty analysis, engineers must often deal with lifetime data that are non-homogeneous. Most of the time this variability is unobserved, but it has to be taken into account for reliability or warranty cost analysis. A further problem is that in reliability analysis the data are heavily censored, which makes estimation more difficult. We propose to model the heterogeneity of lifetimes by a mixture-and-competition model of two Weibull distributions. Unfortunately, the performance of classical estimation methods (maximum likelihood via EM, Bayesian approaches via MCMC) is jeopardized by the high number of parameters and the heavy censoring. To overcome the problem of heavy censoring in the estimation of the Weibull mixture parameters, we propose a Bayesian bootstrap method called Bayesian Restoration Maximization. We use an unsupervised clustering method to identify vehicle usage profiles. Our method makes it possible to simulate the need for spare parts for a vehicle fleet for the duration of the contract or for a contract extension.
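The estimation target described here, the likelihood of a two-component Weibull mixture under right censoring, can be written compactly. The sketch below is illustrative: toy parameters, a plain mixture rather than the thesis's mixture-and-competition model, and not the Bayesian Restoration Maximization algorithm itself. Observed failures contribute the mixture density, censored units the mixture survival function.

```python
# Sketch of the right-censored log-likelihood for a two-component Weibull
# mixture, the quantity that estimation methods (EM, MCMC, or the thesis's
# Bayesian Restoration Maximization) must handle. Toy parameters.
import numpy as np
from scipy.stats import weibull_min

def mixture_loglik(t, observed, w, shapes, scales):
    """t: times; observed: 1 if failure observed, 0 if right-censored."""
    pdf = sum(p * weibull_min.pdf(t, c, scale=s)
              for p, c, s in zip(w, shapes, scales))
    sf = sum(p * weibull_min.sf(t, c, scale=s)       # mixture survival function
             for p, c, s in zip(w, shapes, scales))
    # observed failures contribute the density, censored units the survival
    return np.sum(observed * np.log(pdf) + (1 - observed) * np.log(sf))

rng = np.random.default_rng(0)
t_true = np.where(rng.random(500) < 0.6,
                  weibull_min.rvs(1.5, scale=100, size=500, random_state=rng),
                  weibull_min.rvs(3.0, scale=300, size=500, random_state=rng))
censor = rng.uniform(0, 400, size=500)               # heavy right censoring
t = np.minimum(t_true, censor)
observed = (t_true <= censor).astype(float)
print(mixture_loglik(t, observed, w=[0.6, 0.4], shapes=[1.5, 3.0],
                     scales=[100.0, 300.0]))
```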
Kubalík, Jakub. "Mining of Textual Data from the Web for Speech Recognition". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237170.
Darnieder, William Francis. "Bayesian Methods for Data-Dependent Priors". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1306344172.
Walter, Gero. "Generalized Bayesian inference under prior-data conflict". Diss., Ludwig-Maximilians-Universität München, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-170598.
The subject of this dissertation is the generalization of Bayesian inference through the use of imprecise or interval-valued probabilities, with a particular focus on model behaviour in the case where prior knowledge and observed data are in conflict. Bayesian inference is one of the main approaches for deriving statistical inference methods. In this approach, (possibly subjective) prior knowledge about the model parameters must be captured in a so-called prior distribution. All inference statements are then based on the so-called posterior distribution, which is computed via Bayes' theorem and combines the prior knowledge with the information in the data. How a prior distribution should be chosen in practice is strongly contested. A large part of the literature deals with the determination of so-called noninformative priors, which aim to eliminate, or at least to standardize, the influence of the prior on the posterior. If, however, only few data are available, or if the data provide little information about the model parameters, it may instead be necessary to include specific prior information in a model. Moreover, so-called shrinkage estimators, which are frequently used in frequentist approaches, can be regarded as Bayes estimators with informative priors. When specific prior knowledge is used to determine a prior (for example, by eliciting an expert), but the sample size is not large enough to overrule such an informative prior, a conflict between prior and data can arise: the observed sample (cleaned of potential outliers) may imply parameter values that are extremely surprising and unexpected from the viewpoint of the prior. In such a case it may be unclear whether the prior knowledge or the validity of the data collection should be called into question (there could, for example, be measurement errors, coding errors, or sampling bias due to selection effects). Without doubt, such a conflict should be reflected in the posterior and should lead to more cautious inference statements; most statisticians would therefore expect wider posterior credibility intervals for the model parameters in such cases. In models based on a particular parametric form of the prior that considerably simplifies the computation of the posterior (so-called conjugate priors), however, such a conflict is simply averaged out, and inference statements based on such a posterior will lull the user into a false sense of security. In this problematic situation, imprecise probability methods can offer a well-founded way out by expressing uncertainty about the model parameters through sets of priors and posteriors, respectively. Recent findings from risk research, econometrics, and artificial intelligence research, which suggest the existence of different kinds of uncertainty, support such a modelling approach, which builds on the observation that the usual probability calculus based on the approaches of Kolmogorov or de Finetti is too restrictive to adequately capture this multidimensional character of uncertainty. Indeed, in these approaches only one of the dimensions of uncertainty can be modelled, namely that of ideal stochasticity. This dissertation investigates how sets of priors for samples from exponential families can be described efficiently. We develop models that guarantee sufficient flexibility so that a wide variety of forms of partial prior knowledge can be expressed. These models lead to cautious inference statements when prior and data are in conflict, yet still allow more precise statements when prior and data essentially agree, without hampering their use in statistical practice through excessive complexity. We derive the general inference properties of these models, which exhibit a clear and traceable relationship between model uncertainty and the precision of inference statements, and study applications in various areas, including so-called common-cause-failure models and Bayesian linear regression. In addition, the models developed in this dissertation are compared with other imprecise probability models, and their respective strengths and weaknesses are discussed, in particular with respect to the precision of inference statements under conflict between prior knowledge and observed data.
Brouwer, Thomas Alexander. "Bayesian matrix factorisation : inference, priors, and data integration". Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/269921.
Kaushik, Rituraj. "Data-Efficient Robot Learning using Priors from Simulators". Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0105.
As soon as robots step out into the real and uncertain world, they have to adapt to various unanticipated situations by acquiring new skills as quickly as possible. Unfortunately, on robots, current state-of-the-art reinforcement learning algorithms (e.g., deep reinforcement learning) require long interaction times to train a new skill. In this thesis, we have explored methods to allow a robot to acquire new skills through trial and error within a few minutes of physical interaction. Our primary focus is to incorporate prior knowledge from a simulator with the real-world experiences of a robot to achieve rapid learning and adaptation. In our first contribution, we propose a novel model-based policy search algorithm called Multi-DEX that (1) is capable of finding policies in sparse-reward scenarios, (2) does not impose any constraints on the type of policy or the type of reward function, and (3) is as data-efficient as state-of-the-art model-based policy search algorithms in non-sparse-reward scenarios. In our second contribution, we propose a repertoire-based online learning algorithm called APROL, which allows a robot to adapt quickly to physical damage (e.g., a damaged leg) or environmental perturbations (e.g., terrain conditions) and solve the given task. In this work, we use several repertoires of policies generated in simulation for a subset of the possible situations that the robot might face in the real world. During online learning, the robot automatically figures out the most suitable repertoire with which to adapt and control itself. We show that APROL outperforms several baselines, including the current state-of-the-art repertoire-based learning algorithm RTE, by solving the tasks in much less interaction time than the baselines. In our third contribution, we introduce a gradient-based meta-learning algorithm called FAMLE. FAMLE meta-trains the dynamical model of the robot on simulated data so that the model can be adapted quickly to various unseen situations using real-world observations. By using FAMLE with a model-predictive control framework, we show that our approach outperforms several model-based and model-free learning algorithms and solves the given tasks in less interaction time than the baselines.
Fu, Shuai. "Inverse problems occurring in uncertainty analysis". Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112208/document.
This thesis provides a probabilistic solution to inverse problems through Bayesian techniques. The inverse problem considered here is to estimate the distribution of a non-observed random variable X from some noisy observed data Y explained by a time-consuming physical model H. In general, such inverse problems are encountered when treating uncertainty in industrial applications. Bayesian inference is favored, as it accounts for prior expert knowledge on X in a small-sample-size setting. A Metropolis-Hastings-within-Gibbs algorithm is proposed to compute the posterior distribution of the parameters of X through a data augmentation process. Since it requires a high number of calls to the expensive function H, the model is replaced by a kriging meta-model. This approach involves several errors of different natures, and we focus on measuring and reducing the possible impact of those errors. A DAC criterion has been proposed to assess the relevance of the numerical design of experiments and the prior assumption, taking into account the observed data. Another contribution is the construction of adaptive designs of experiments suited to our particular purpose in the Bayesian framework. The main methodology presented in this thesis has been applied to a real hydraulic engineering case study.
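A skeleton of the Metropolis-Hastings-within-Gibbs sweep named in the abstract, with a toy target standing in for the posterior that would be built from the kriging surrogate of H. Everything here (step size, target, names) is an illustrative assumption, not the thesis implementation.

```python
# Skeleton of a Metropolis-within-Gibbs sweep: each parameter is updated in
# turn with a random-walk Metropolis step. `log_post` stands in for the
# posterior built from the (surrogate) forward model H. Illustrative only.
import numpy as np

def metropolis_within_gibbs(log_post, theta0, n_iter=5000, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for it in range(n_iter):
        for j in range(theta.size):                  # one coordinate at a time
            prop = theta.copy()
            prop[j] += step * rng.normal()
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:  # accept/reject
                theta, lp = prop, lp_prop
        chain[it] = theta
    return chain

# Toy target: correlated bivariate normal posterior.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
chain = metropolis_within_gibbs(lambda th: -0.5 * th @ cov_inv @ th, [2.0, -2.0])
print(chain[2000:].mean(axis=0))  # should be near [0, 0]
```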
Xue, Xinyu. "Data preservation in intermittently connected sensor network with data priority". Thesis, Wichita State University, 2013. http://hdl.handle.net/10057/6848.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science.
Duan, Yuyan. "A Modified Bayesian Power Prior Approach with Applications in Water Quality Evaluation". Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/29976.
Ph.D.
Khalili, K. "Enhancing vision data using prior knowledge for assembly applications". Thesis, University of Salford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360432.
Razzaq, Misbah. "Integrating phosphoproteomic time series data into prior knowledge networks". Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0048/document.
Traditional canonical signaling pathways help to understand the overall signaling processes inside the cell. Large-scale phosphoproteomic data provide insight into alterations among different proteins under different experimental settings. Our goal is to combine traditional signaling networks with complex phosphoproteomic time-series data in order to unravel cell-specific signaling networks. On the application side, we apply and improve a caspo time series method conceived to integrate time-series phosphoproteomic data into protein signaling networks. We use a large-scale real case study from the HPN-DREAM Breast Cancer challenge. We infer a family of Boolean models from multiple perturbation time-series data of four breast cancer cell lines, given a prior protein signaling network. The obtained results are comparable to those of the top-performing teams of the HPN-DREAM challenge. We also discovered that similar models are clustered together in the solution space. On the computational side, we improved the method to discover diverse solutions and to reduce the computation time.
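The objects being inferred, Boolean models of signaling, can be simulated in a few lines. The tiny network below is hypothetical (it is not a model inferred in the thesis, and not the caspo implementation); it only shows how a synchronous Boolean update produces a trajectory that can be compared against phosphoproteomic time points.

```python
# Toy Boolean signaling model, simulated synchronously. The wiring is
# hypothetical and for illustration only.
def step(state: dict) -> dict:
    """One synchronous update of a toy Boolean signaling network."""
    return {
        "EGF": state["EGF"],                       # stimulus, held fixed
        "RAS": state["EGF"],                       # EGF activates RAS
        "RAF": state["RAS"] and not state["AKT"],  # AKT inhibits RAF
        "ERK": state["RAF"],                       # readout phosphoprotein
        "AKT": state["ERK"],                       # hypothetical feedback edge
    }

state = {"EGF": True, "RAS": False, "RAF": False, "ERK": False, "AKT": False}
trajectory = [state]
for _ in range(5):                                 # compare against time points
    state = step(state)
    trajectory.append(state)
for t, s in enumerate(trajectory):
    print(t, {k: int(v) for k, v in s.items()})
```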
Lindlöf, Angelica. "Deriving Genetic Networks from Gene Expression Data and Prior Knowledge". Thesis, University of Skövde, Department of Computer Science, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-589.
In this work, three different approaches for deriving genetic association networks were tested: Pearson correlation, an algorithm based on the Boolean network approach, and prior knowledge. Pearson correlation and the Boolean-network-based algorithm derived associations from gene expression data. In the third approach, prior knowledge from a known genetic network of a related organism was used to derive associations for the target organism, by matching homologs and mapping the known genetic network onto the target organism. The results indicate that the Pearson correlation approach gave the best results, but the prior-knowledge approach seems to be the one most worth pursuing.
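The first approach, deriving associations by Pearson correlation, reduces to thresholding a correlation matrix. A minimal sketch with random data standing in for a real expression matrix (gene count, sample count, and threshold are arbitrary assumptions):

```python
# Derive gene-gene association edges by thresholding pairwise Pearson
# correlations of expression profiles. Random toy data, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
expr = rng.normal(size=(6, 40))                # 6 genes x 40 expression samples
expr[1] = expr[0] + 0.1 * rng.normal(size=40)  # make genes 0 and 1 co-expressed

corr = np.corrcoef(expr)                       # 6 x 6 Pearson correlation matrix
threshold = 0.8
edges = [(i, j, round(corr[i, j], 2))
         for i in range(len(corr)) for j in range(i + 1, len(corr))
         if abs(corr[i, j]) >= threshold]
print(edges)                                   # expect the (0, 1) association
```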
Hira, Zena Maria. "Dimensionality reduction methods for microarray cancer data using prior knowledge". Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/33812.
Patil, Vivek. "Criteria for Data Consistency Evaluation Prior to Modal Parameter Estimation". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627667589352536.
Porter, Erica May. "Applying an Intrinsic Conditional Autoregressive Reference Prior for Areal Data". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91385.
Master of Science
Spatial data is increasingly relevant in a wide variety of research areas. Economists, medical researchers, ecologists, and policymakers all make critical decisions about populations using data that naturally display spatial dependence. One such data type is areal data; data collected at county, habitat, or tract levels are often spatially related. Most convenient software platforms provide analyses for independent data, as the introduction of spatial dependence increases the complexity of corresponding models and computation. Use of analyses with an independent data assumption can lead researchers and policymakers to make incorrect, simplistic decisions. Bayesian hierarchical models can be used to effectively model areal data because they have flexibility to accommodate complicated dependencies that are common to spatial data. However, use of hierarchical models increases the number of model parameters and requires specification of prior distributions. We present and describe ref.ICAR, an R package available to researchers that automatically implements an objective Bayesian analysis that is appropriate for areal data.
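For reference, the intrinsic CAR structure underlying ref.ICAR can be built directly from a neighborhood matrix: the prior on spatial effects phi has density proportional to exp(-phi' (D - W) phi / (2 tau^2)), where D is diagonal with the neighbor counts. The toy 4-region lattice below is an assumption for illustration; it also shows the rank deficiency that makes the ICAR prior improper and motivates the careful prior work the thesis addresses.

```python
# Build the ICAR precision structure Q = D - W from a toy neighborhood
# matrix of 4 areal units. Illustrative only; not the ref.ICAR code.
import numpy as np

W = np.array([[0, 1, 1, 0],      # adjacency of 4 areal units (hypothetical)
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])
D = np.diag(W.sum(axis=1))       # neighbor counts on the diagonal
Q = D - W                        # ICAR precision (up to 1/tau^2), singular

# Q is rank-deficient (rows sum to zero): the prior is improper, which is
# why ICAR models need a sum-to-zero constraint and careful prior choices.
print(np.linalg.matrix_rank(Q), "of", Q.shape[0])  # rank 3 of 4
print(np.round(np.linalg.eigvalsh(Q), 3))          # one zero eigenvalue
```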
Bogdan, Abigail Marie. "Student Reasoning from Data Tables: Data Interpretation in Light of Student Ability and Prior Belief". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1460120122.
Hotti, Alexandra. "Bayesian insurance pricing using informative prior estimation techniques". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286312.
Texto completo da fonteStora, väletablerade försäkringsbolag modellerar sina riskpremier med hjälp av statistiska modeller och data från skadeanmälningar. Eftersom försäkringsbolagen har tillgång till en lång historik av skadeanmälningar, så kan de förutspå sina framtida skadeanmälningskostnader med hög precision. Till skillnad från ett stort försäkringsbolag, har en liten, nyetablerad försäkringsstartup inte tillgång till den mängden data. Det nyetablerade försäkringsbolagets initiala prissättningsmodell kan därför istället byggas genom att direkt estimera parametrarna i en tariff med ett icke statistiskt tillvägagångssätt. Problematiken med en sådan metod är att tariffens parametrar inte kan justerares baserat på bolagets egna skadeanmälningar med klassiska frekvensbaserade prissättningsmetoder. I denna masteruppsats presenteras tre metoder för att estimera en existerande statisk multiplikativ tariff. Estimaten kan användas som en prior i en Bayesiansk riskpremiemodell. Likheten mellan premierna som har satts via den estimerade och den faktiska statiska tariffen utvärderas genom att beräkna deras relativa skillnad. Resultaten från jämförelsen tyder på att priorn kan estimeras med hög precision. De estimerade priorparametrarna kombinerades sedan med startupbolaget Hedvigs skadedata. Posteriorn estimerades sedan med Metropolis and Metropolis-Hastings, vilket är två Markov Chain Monte Carlo simuleringsmetoder. Sammantaget resulterade detta i en prissättningsmetod som kunde utnyttja kunskap från en existerande statisk prismodell, samtidigt som den kunde ta in mer kunskap i takt med att fler skadeanmälningar skedde. Resultaten tydde på att de Bayesianska prissättningsmetoderna kunde förutspå skadekostnader med högre precision jämfört med den statiska tariffen.
Amaliksen, Ingvild. "Bayesian Inversion of Time-lapse Seismic Data using Bimodal Prior Models". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-24625.
Aggarwal, Deepti. "Inferring Signal Transduction Pathways from Gene Expression Data using Prior Knowledge". Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/56601.
Master of Science
Rincé, Romain. "Behavior recognition on noisy data-streams constrained by complex prior knowledge". Thesis, Nantes, 2018. http://www.theses.fr/2018NANT4085/document.
Complex Event Processing (CEP) consists of the analysis of data streams in order to extract particular patterns and behaviours described, in general, in a logical formalism. In the classical approach, the data of a stream, or events, are supposed to be the complete and perfect observation of the system producing them. However, in many cases, the means for collecting such data, such as sensors, are not infallible: they may miss the detection of a particular event or, on the contrary, report events that did not occur. In this thesis, we have studied possible models for representing this uncertainty, so as to give CEP robustness to it, as well as the tools necessary to allow the recognition of complex behaviours based on the chronicle formalism. In this perspective, three approaches have been considered. The first is based on Markov logic networks, representing the structure of the chronicles as a set of logical formulas, each with a confidence value. We show that this model, although widely applied in the literature, is inapplicable for a realistic application, given the dimensions of such a problem. The second approach is based on techniques from the SAT community to enumerate all possible solutions of a given problem and thus to produce a confidence value for the recognition of a chronicle expressed, again, as a logical structure. Finally, we propose a last approach based on Markov chains to produce a set of samples explaining the evolution of the model in agreement with the observed data. These samples are then analysed by a recognition system to count the occurrences of a particular chronicle.
Bai, Peng. "Stochastic inversion of time domain electromagnetic data with non-trivial prior". Doctoral thesis, Università degli Studi di Cagliari, 2022. http://hdl.handle.net/11584/328807.
Texto completo da fonteFiloche, Arthur. "Variational Data Assimilation with Deep Prior. Application to Geophysical Motion Estimation". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS416.
The recent revival of deep learning has impacted the state of the art in many scientific fields handling high-dimensional data. In particular, the availability and flexibility of algorithms have allowed the automation of inverse problem solving, learning estimators directly from data. This paradigm shift has also reached the research field of numerical weather prediction. However, issues inherent to the geosciences, such as imperfect data and the lack of ground truth, complicate the direct application of learning methods. Classical data assimilation algorithms, which frame these issues and allow the use of physics-based constraints, are currently the methods of choice in operational weather forecasting centers. In this thesis, we experimentally study the hybridization of deep learning and data assimilation algorithms, with the objective of correcting forecast errors due to incomplete physical models or uncertain initial conditions. First, we highlight the similarities and nuances between variational data assimilation and deep learning. Following the state of the art, we exploit the complementarity of the two approaches in an iterative algorithm and then propose an end-to-end learning method. In a second part, we address the core of the thesis: variational data assimilation with deep prior, regularizing classical estimators with convolutional neural networks. The idea is instantiated in various algorithms, including optimal interpolation, 4DVAR with strong and weak constraints, simultaneous assimilation, and super-resolution or uncertainty estimation. We conclude with perspectives on the proposed hybridization.
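For reference, the strong-constraint 4D-Var cost function that such deep-prior hybrids regularize, in its standard textbook form (not a formula quoted from the thesis):

```latex
% Standard strong-constraint 4D-Var cost (textbook form): x_0 is the initial
% condition, x_b the background, M_{0->k} the forecast model, H_k the
% observation operator, B and R_k the background and observation covariances.
J(x_0) = \frac{1}{2}\,(x_0 - x_b)^{\top} B^{-1} (x_0 - x_b)
       + \frac{1}{2}\sum_{k=0}^{K}
         \bigl(y_k - H_k(M_{0\to k}(x_0))\bigr)^{\top} R_k^{-1}
         \bigl(y_k - H_k(M_{0\to k}(x_0))\bigr)
```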
Zhang, Xiang. "Analysis of Spatial Data". UKnowledge, 2013. http://uknowledge.uky.edu/statistics_etds/4.
Ching, Kai-Sang. "Priority CSMA schemes for integrated voice and data transmission". Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28372.
Texto completo da fonteApplied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
Walter, Gero, and Thomas Augustin (academic advisor). "Generalized Bayesian inference under prior-data conflict / Gero Walter. Betreuer: Thomas Augustin". München: Universitätsbibliothek der Ludwig-Maximilians-Universität, 2013. http://d-nb.info/1052779247/34.
Wahlqvist, Kristian. "A Comparison of Motion Priors for EKF-SLAM in Autonomous Race Cars". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-262673.
Texto completo da fonteSimultan lokalisering och kartläggning (SLAM) är ett grundläggande problem att lösa för alla typer av autonoma fordon eller robotar. SLAM handlar om problemet för en agent att inkrementellt konstruera en karta av sin omgivning samtidigt som den håller koll på sin position inom kartan, med hjälp av diverse sensorer. Målet med detta examensarbete är att demonstrera skillnader och begränsningar för olika metoder att uppskatta bilens momentana förflyttning, när denna momentana förflyttning används som en fösta skattning av bilens rörelse för SLAM-algoritmen EKF-SLAM. Utvärderingen grundar sig i autonom motorsport och de undersökta algoritmerna utvärderades under speciellt svåra förhållanden så som hög hastighet och när bilen sladdar. Tre olika metoder för att skatta bilens momentana förflyttning implementerades där samtliga metoder baseras på data från olika sensorer. Dessa var den visuella metoden Libviso2 som använder stereokameror, den flödesbaserade metoden RF2O som använder en 2D lidar, samt en metod som baseras på hjulens rotationshastighet som kombinerades med fordonets uppmätta vinkelhastighet från ett gyroskop. De olika algoritmerna utvärderades separat på data som genererats genom att köra en modifierad radiostyrd bil runt olika banor utmarkerade av trafikkoner, samt för olika nivåer av aggressiv körstil. Estimeringen av bilens bana och konernas positioner jämfördes sedan separat i termer av medelabsolutfel samt beräkningstid för varje metod och bana. Resultaten visar att Libviso2 ger en bra skattning av den momentana förflyttningen och presterar konsekvent över samtliga tester. RF2O och metoden baserad på hjulens rotationshastighet var i vissa fall tillräckligt bra för korrekt lokalisering och kartläggning, men presterade dåligt i andra fall.
Abufadel, Amer Y. "4D Segmentation of Cardiac MRI Data Using Active Surfaces with Spatiotemporal Shape Priors". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14005.
Sonksen, Michael David. "Bayesian Model Diagnostics and Reference Priors for Constrained Rate Models of Count Data". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1312909127.
Grün, Bettina, and Gertraud Malsiner-Walli. "Bayesian Latent Class Analysis with Shrinkage Priors: An Application to the Hungarian Heart Disease Data". FedOA -- Federico II University Press, 2018. http://epub.wu.ac.at/6612/1/heart.pdf.
Nagy, Arnold B. "Priority area performance and planning areas with limited biological data". Thesis, University of Sheffield, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425193.
Ostovari, Pouya. "Priority-Based Data Transmission in Wireless Networks using Network Coding". Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/360800.
Ph.D.
With the rapid development of mobile device technology, these devices are becoming very popular and a part of our everyday lives. These devices, which are equipped with wireless radios such as cellular and WiFi, affect almost every aspect of our lives. People use smartphones and tablets to access the Internet, watch videos, chat with their friends, and so on. The wireless connections that these devices provide are more convenient than wired connections. However, there are two main challenges in wireless networks: error-prone wireless links and limited network resources. Network coding is widely used to provide reliable data transmission and to use network resources efficiently. Network coding is a technique in which the original packets are mixed together using algebraic operations. In this dissertation, we study the applications of network coding in making wireless transmissions robust against transmission errors and in efficient resource management. In many types of data, different parts of the data differ in importance. For instance, in the case of numeric data, the importance decreases from the most significant to the least significant bit. Also, in multi-layer videos, packets in different layers of the video are not equally important. We propose novel data transmission methods in wireless networks that consider the unequal importance of the different parts of the data. In order to provide robust data transmission and use the limited resources efficiently, we use random linear network coding, which is a type of network coding. In the first part of this dissertation, we study the application of network coding in resource management. In order to use the limited storage of cache nodes efficiently, we propose to use triangular network coding for content distribution. We also design a scalable video-on-demand system, which uses helper nodes and network coding to provide users with their desired video quality. In the second part, we investigate the application of network coding in providing robust wireless transmissions. We propose symbol-level network coding, in which each packet is partitioned into symbols with different importance. We also propose a method that uses network coding to make multi-layer videos robust against transmission errors.
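Random linear network coding, the tool used throughout the dissertation, can be demonstrated over GF(2), where linear combinations are XORs (real systems typically work over GF(2^8); the packet sizes and counts below are arbitrary assumptions): a receiver can decode as soon as the coefficient rows it has collected reach full rank.

```python
# Random linear network coding (RLNC) over GF(2): coded packets are random
# XOR combinations of the originals; decoding is possible once a full-rank
# set of combinations has been received. Illustrative sketch only.
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix via Gaussian elimination mod 2."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

rng = np.random.default_rng(7)
packets = rng.integers(0, 2, size=(4, 16))   # 4 original packets, 16 bits each
coeffs = rng.integers(0, 2, size=(6, 4))     # coefficients of 6 coded packets
coded = (coeffs @ packets) % 2               # each row is an XOR combination

# The receiver decodes (by Gaussian elimination) once its coefficient rows
# reach rank 4; with random coefficients a few extra receptions suffice.
print(coded.shape, "rank:", gf2_rank(coeffs))
```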
Li, Bin. "Statistical learning and predictive modeling in data mining". Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155058111.
Crabb, Ryan Eugene. "Fast Time-of-Flight Phase Unwrapping and Scene Segmentation Using Data Driven Scene Priors". Thesis, University of California, Santa Cruz, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=3746704.
Texto completo da fonteThis thesis regards the method of full field time-of-flight depth imaging by way of amplitude modulated continuous wave signals correlated with step-shifted reference waveforms using a specialized solid state CMOS sensor, referred to as photonic mixing device. The specific focus deals with the inherent issue of depth ambiguity due to a fundamental property of periodic signals: that they repeat, or wrap, after each period, and any signal shifted by a whole number of wavelengths is indistinguishable from the original. Recovering the full extent of the signal’s path is known as phase unwrapping. The common, accepted solution requires the imaging of a series of two or more signals with differing modulation frequencies to resolve the ambiguity, the time delay of which will result in erroneous or invalid measurements for non-static elements of the scene. This work details a physical model of the observable illumination of the scene which provides priors for a novel probabilistic framework to recover the scene geometry by imaging only a single modulated signal. It is demonstrated that this process is able to provide more than adequate results in a majority of representative scenes, and that it can be accomplished on typical computer hardware at a speed that allows for the range imaging to be utilized in real-time, interactive applications.
One such real-time application is presented: alpha matting, or foreground segmentation, for background substitution in live video. This is a generalized version of the common green-screening technique utilized, for example, by every local weather reporter. The presented method, however, requires no special background, and is able to perform on high-resolution video from a lower-resolution depth image.
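The depth ambiguity that drives the phase-unwrapping problem is simple arithmetic: a phase phi measured at modulation frequency f fixes depth only modulo the ambiguity range c/(2f), so unwrapping means choosing the integer wrap count. Illustrative numbers only (the frequency is an assumption, not taken from the thesis):

```python
# Candidate depths consistent with a single measured phase: depth is only
# determined modulo the unambiguous range c / (2 * f_mod). Illustrative.
import math

C = 299_792_458.0                        # speed of light, m/s

def candidate_depths(phi, f_mod, n_max=4):
    """All depths consistent with phase phi (radians) at frequency f_mod."""
    ambiguity = C / (2 * f_mod)          # unambiguous range, ~5 m at 30 MHz
    base = (phi / (2 * math.pi)) * ambiguity
    return [base + n * ambiguity for n in range(n_max)]

# At 30 MHz, a measured phase of pi/2 is consistent with ~1.25 m, ~6.25 m, ...
print([round(d, 2) for d in candidate_depths(phi=math.pi / 2, f_mod=30e6)])
```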
Tarca, Adi-Laurentiu. "Neural networks in multiphase reactors data mining: feature selection, prior knowledge, and model design". Thesis, Université Laval, 2004. http://www.theses.ulaval.ca/2004/21673/21673.pdf.
Artificial neural networks (ANN) have recently gained enormous popularity in many engineering fields, not only for their appealing "learning ability" but also for their versatility and superior performance with respect to classical approaches. Without assuming a particular equational form, ANNs mimic complex nonlinear relationships that might exist between an input feature vector x and a dependent (output) variable y. In the context of multiphase reactors, the potential of neural networks is high, as modeling the key hydrodynamic and transfer characteristics of interest by solving first-principles equations is intractable. The general-purpose applicability of neural networks in regression and classification, however, poses some subsidiary difficulties that can make their use inappropriate for certain modeling problems. Some of these problems are common to any empirical modeling technique, including the feature selection step, in which one has to decide which subset xs ⊂ x should constitute the inputs (regressors) of the model. Other weaknesses specific to neural networks are overfitting, model design ambiguity (architecture and parameter identification), and the lack of interpretability of the resulting models. This work addresses three issues in the application of neural networks: (i) feature selection, (ii) matching prior knowledge within the models (to address, to some extent, the overfitting and interpretability issues), and (iii) model design. Feature selection was conducted with genetic algorithms (another technique from the artificial intelligence field), which allowed the identification of good combinations of dimensionless inputs for regression ANNs, or with sequential methods in a classification context. The types of a priori knowledge we wanted the resulting ANN models to match were monotonicity and/or concavity in regression, and class connectivity and different misclassification costs in classification. Even though the purpose of the study was rather methodological, some of the resulting ANN models might be considered contributions per se. These models, direct proofs of the underlying methodologies, are useful for predicting liquid hold-up and pressure drop in counter-current packed beds, and flow regime type in trickle beds.
Mehdi, Riyadh Abdul Kadir. "An investigation of object recognition using spatial data and the concept of prior expectation". Thesis, University of Liverpool, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.291671.
Jeanmougin, Marine. "Statistical methods for robust analysis of transcriptome data by integration of biological prior knowledge". Thesis, Evry-Val d'Essonne, 2012. http://www.theses.fr/2012EVRY0029/document.
Recent advances in molecular biology have led biologists toward high-throughput genomic studies. In particular, the investigation of the human transcriptome offers unprecedented opportunities for understanding cellular and disease mechanisms. In this PhD, we focus on providing robust statistical methods dedicated to the treatment and analysis of high-throughput transcriptome data. We discuss the differential analysis approaches available in the literature for identifying genes associated with a phenotype of interest and propose a comparison study. We provide practical recommendations on the appropriate method to use based on various simulation models and real datasets. With the eventual goal of overcoming the inherent instability of differential analysis strategies, we developed an innovative approach called DiAMS, for DIsease Associated Modules Selection. This method selects significant modules of genes rather than individual genes and involves the integration of both transcriptome and protein-interaction data in a local-score strategy. We then focus on the development of a framework to infer gene regulatory networks by integrating an informative biological prior over network structures using Gaussian graphical models. This approach offers the possibility of exploring the molecular relationships between genes, leading to the identification of altered regulations potentially involved in disease processes. Finally, we apply our statistical developments to study the metastatic relapse of breast cancer.