Selection of scientific literature on the topic "A priori data"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "A priori data".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are present in the item's metadata.

Journal articles on the topic "A priori data"

1. Filippidis, A. "Data fusion using sensor data and a priori information". Control Engineering Practice 4, no. 1 (January 1996): 43–53. http://dx.doi.org/10.1016/0967-0661(95)00205-x.
2. Pankov, A. R., and A. M. Skuridin. "Data Processing Under A Priori Statistical Uncertainty". IFAC Proceedings Volumes 19, no. 5 (May 1986): 213–17. http://dx.doi.org/10.1016/s1474-6670(17)59796-7.
3. Roberts, R. A. "51458 Limited data tomography using support minimization with a priori data". NDT & E International 27, no. 2 (April 1994): 105–6. http://dx.doi.org/10.1016/0963-8695(94)90364-6.
4. McKee, B. T. A. "Deconvolution of noisy data using a priori constraints". Canadian Journal of Physics 67, no. 8 (August 1, 1989): 821–26. http://dx.doi.org/10.1139/p89-142.
Abstract:
The deconvolution of experimental data with noise present is discussed with emphasis on the ill-conditioned nature of the problem and on the need for a priori constraints to select a solution. An analytical Fourier space deconvolution that selects the minimum-norm solution subject to a least-squares constraint is described. The cutoff parameter used in the constraint is evaluated from the statistical fluctuations of the data. This method is illustrated by its application to 3-d image reconstruction for positron emission tomography.
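The cutoff-constrained Fourier-space deconvolution the abstract describes can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the function name and cutoff handling are assumptions.

```python
import numpy as np

def fourier_deconvolve(data, kernel, cutoff):
    """Deconvolve `data` by `kernel` in Fourier space.

    Frequencies where the kernel response falls below `cutoff` are
    zeroed rather than inverted; this a priori constraint selects the
    minimum-norm solution and prevents noise amplification in the
    ill-conditioned components.
    """
    D = np.fft.fft(data)
    K = np.fft.fft(kernel)
    X = np.zeros_like(D)
    mask = np.abs(K) >= cutoff      # keep only well-conditioned frequencies
    X[mask] = D[mask] / K[mask]
    return np.real(np.fft.ifft(X))
```

In the paper the cutoff is evaluated from the statistical fluctuations of the data; here it is left as a free parameter.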
5. Lestari, Putri Anggraini, Marnis Nasution, and Syaiful Zuhri Harahap. "Analisa Data Penjualan Pada Apotek Ritonga Farma Menggunakan Data Mining Apriori". INFORMATIKA 12, no. 2 (December 14, 2024): 180–89. https://doi.org/10.36987/informatika.v12i2.5651.
Abstract:
A pharmacy is a place or business specifically dedicated to providing medicines and other health products to the public; in some countries it is known as a drugstore. Pharmacies dispense both prescription and over-the-counter medicines to help patients cope with the health problems they are experiencing. Some pharmacies also offer additional services such as blood pressure checks, vaccinations, simple health screenings, and health counselling. Applying the Apriori method to pharmacy sales requires a deep understanding of the data structure and a proper product classification. By knowing which items are frequently purchased together, a pharmacy can place them close together on shelves or in strategic locations, increasing shopper convenience and speeding up the purchase process. The Apriori method is a data mining technique used to find hidden patterns or associations in large datasets: it searches for item sets that frequently appear together across transactions. Its main principle is that every subset of a frequent itemset must itself be frequent, which allows candidate itemsets with an infrequent subset to be pruned.
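As an illustration of the pruning principle just described, here is a minimal Apriori sketch in Python (the item names are hypothetical; this is not the paper's code):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) mapped to their support,
    using a priori pruning: every subset of a frequent itemset must
    itself be frequent."""
    n = len(transactions)
    sets = [set(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in sets) / n

    # Frequent 1-itemsets.
    items = {i for t in sets for i in t}
    frequent = {frozenset([i]): s for i in items
                if (s := support(frozenset([i]))) >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate k-itemsets from unions of frequent (k-1)-itemsets,
        # pruning any candidate that has an infrequent (k-1)-subset.
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        frequent = {c: s for c in candidates
                    if (s := support(c)) >= min_support}
        result.update(frequent)
        k += 1
    return result

def confidence(result, antecedent, consequent):
    """Confidence of the association rule antecedent -> consequent."""
    return result[antecedent | consequent] / result[antecedent]
```

Rules with high confidence (e.g. "buyers of vitamin C also buy masks") are what drive the shelf-placement decisions mentioned above.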
6. Yi, Guodong, Chuanyuan Zhou, Yanpeng Cao, and Hangjian Hu. "Hybrid Assembly Path Planning for Complex Products by Reusing a Priori Data". Mathematics 9, no. 4 (February 17, 2021): 395. http://dx.doi.org/10.3390/math9040395.
Abstract:
Assembly path planning (APP) for complex products is challenging due to the large number of parts and intricate coupling requirements. A hybrid assembly path planning method is proposed herein that reuses a priori paths to improve the efficiency and success ratio. The assembly path is initially segmented to improve its reusability. Subsequently, the planned assembly paths are employed as a priori paths to establish an a priori tree, which is expanded according to the bounding sphere of the part to create the a priori space for path searching. Three rapidly exploring random tree (RRT)-based algorithms are studied for path planning based on a priori path reuse. The RRT* algorithm establishes the new path exploration tree in the early planning stage when there is no a priori path to reuse. The static RRT* (S-RRT*) and dynamic RRT* (D-RRT*) algorithms form the connection between the exploration tree and the a priori tree with a pair of connection points after the extension of the exploration tree to a priori space. The difference between the two algorithms is that the S-RRT* algorithm directly reuses an a priori path and obtains a new path through static backtracking from the endpoint to the starting point. However, the D-RRT* algorithm further extends the exploration tree via the dynamic window approach to avoid collision between an a priori path and obstacles. The algorithm subsequently obtains a new path through dynamic and non-continuous backtracking from the endpoint to the starting point. A hybrid process combining the RRT*, S-RRT*, and D-RRT* algorithms is designed to plan the assembly path for complex products in several cases. The performances of these algorithms are compared, and simulations indicate that the S-RRT* and D-RRT* algorithms are significantly superior to the RRT* algorithm in terms of the efficiency and success ratio of APP. 
Therefore, hybrid path planning combining the three algorithms helps to improve assembly path planning for complex products.
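For orientation, the core of every RRT-family planner is the sample-and-extend loop sketched below. This is a plain 2-D RRT with goal bias, a deliberately simplified stand-in; the paper's RRT*, S-RRT*, and D-RRT* variants and the a priori tree reuse are not reproduced here.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_bias=0.2,
        max_iters=5000, bounds=(0.0, 10.0), seed=0):
    """Plan a path from start to goal with a basic RRT.
    `is_free(p)` reports whether point p is collision-free."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Sample a target (biased toward the goal) and extend the
        # nearest tree node one step toward it.
        target = goal if rng.random() < goal_bias else (
            rng.uniform(*bounds), rng.uniform(*bounds))
        i = min(range(len(nodes)),
                key=lambda j: math.dist(nodes[j], target))
        px, py = nodes[i]
        d = math.dist((px, py), target)
        if d < 1e-12:
            continue
        new = target if d <= step else (px + step * (target[0] - px) / d,
                                        py + step * (target[1] - py) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) <= step:
            # Backtrack from the new node to the start.
            path, j = [goal], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

The paper's contribution is, roughly, to replace the blind sampling above with reuse of previously planned (a priori) path segments, which is what improves efficiency and success ratio.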
7. Peysson, Flavien, Abderrahmane Boubezoul, Mustapha Ouladsine, and Rachid Outbib. "A Data Driven Prognostic Methodology without a Priori Knowledge". IFAC Proceedings Volumes 42, no. 8 (2009): 1462–67. http://dx.doi.org/10.3182/20090630-4-es-2003.00238.
8. Menon, Nanda K., and John A. Hunt. "Optimizing EELS data sets using ‘a priori’ spectrum simulation". Microscopy and Microanalysis 8, S02 (August 2002): 620–21. http://dx.doi.org/10.1017/s1431927602106040.
9. Suhadi, Suhadi, Carsten Last, and Tim Fingscheidt. "A Data-Driven Approach to A Priori SNR Estimation". IEEE Transactions on Audio, Speech, and Language Processing 19, no. 1 (January 2011): 186–95. http://dx.doi.org/10.1109/tasl.2010.2045799.
10. Nasari, Fina Nasari. "Algoritma A Priori Dalam Pengelompokkan Data Pendaftaran Mahasiswa Baru". Jurnal Sains dan Ilmu Terapan 4, no. 1 (July 1, 2021): 40–45. http://dx.doi.org/10.59061/jsit.v4i1.102.
Abstract:
New student registration is the annual process of admitting new students, and each year it produces ever larger volumes of data. Large data that are not put to good use merely fill up storage. Data mining is one solution for deriving new information from historical data; its functions include association, classification, clustering, prediction, and pattern recognition. The Apriori algorithm is a data mining algorithm that examines the relationships between item sets (associations between one item set and another). The relationship examined in this study is between applicants' secondary-school background and the study programme they choose. The results show that prospective students choosing the information systems programme come from vocational high schools (SMK), with a confidence value of 83%; likewise, for the informatics engineering programme, prospective students come from vocational high schools (SMK) with a confidence value of 83%.

Dissertations on the topic "A priori data"

1. Salas, Percy Enrique Rivera. "StdTrip: An A Priori Design Process for Publishing Linked Data". Pontifícia Universidade Católica do Rio de Janeiro, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28907@1.
Abstract:
Open Data is an approach to promote the interoperability of data in the Web. It consists in the publication of information produced, archived and distributed by organizations in formats that allow it to be shared, discovered, accessed and easily manipulated by third-party consumers. This approach requires the triplification of datasets, i.e., the conversion of database schemata and their instances to a set of RDF triples. A key issue in this process is deciding how to represent database schema concepts in terms of RDF classes and properties. This is done by mapping database concepts to an RDF vocabulary, used as the base for generating the triples. The construction of this vocabulary is extremely important, because the more standards are reused, the easier it will be to interlink the result to other existing datasets. However, tools available today do not support the reuse of standard vocabularies in the triplification process, but rather create new vocabularies. In this thesis, we present the StdTrip process, which guides users in the triplification process while promoting the reuse of standard RDF vocabularies.
2. Egidi, Leonardo. "Developments in Bayesian Hierarchical Models and Prior Specification with Application to Analysis of Soccer Data". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427270.
Abstract:
In recent years the challenge of new prior specifications and of complex hierarchical models has become even more relevant in Bayesian inference. The advent of Markov Chain Monte Carlo techniques, along with new probabilistic programming languages and new algorithms, has extended the boundaries of the field, in both theoretical and applied directions. In the present thesis we address theoretical and applied tasks. In the first part we propose a new class of prior distributions which may depend on the data and are specified as a mixture of a noninformative and an informative prior. A generic prior belonging to this class provides less information than an informative prior and is less likely to dominate the inference when the data size is small or moderate. Such a distribution is well suited for robustness tasks, especially in case of informative prior misspecification. Simulation studies within conjugate models show that this proposal may be convenient for reducing mean squared errors and improving frequentist coverage. Furthermore, under mild conditions this class of distributions yields some other nice theoretical properties. In the second part of the thesis we use hierarchical Bayesian models for predicting some soccer quantities, and we extend the usual match-goals modelling strategy by including the bookmakers' information directly in the model. Posterior predictive checks on in-sample and out-of-sample data show an excellent model fit, good model calibration and, ultimately, the possibility of building efficient betting strategies.
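The flavour of such a mixture prior can be illustrated in a conjugate normal model with known variance: the posterior under a two-component mixture prior is a mixture of the component posteriors, with weights updated by each component's marginal likelihood. This is a toy sketch under my own assumptions (function names and the wide N(0, tau_flat²) stand-in for the noninformative component), not the thesis's actual class.

```python
import math

def normal_posterior(ybar, n, sigma, mu0, tau0):
    """Conjugate update: N(mu0, tau0^2) prior, N(theta, sigma^2) data."""
    prec = 1 / tau0**2 + n / sigma**2
    mean = (mu0 / tau0**2 + n * ybar / sigma**2) / prec
    return mean, math.sqrt(1 / prec)

def mixture_posterior(ybar, n, sigma, mu0, tau0, w, tau_flat=100.0):
    """Posterior under the mixture prior
    w * N(mu0, tau0^2) + (1 - w) * N(0, tau_flat^2).
    Returns the posterior weight of the informative component and the
    two component posteriors (mean, sd)."""
    def marg(mu, tau):
        # Marginal density of the sample mean under one prior component
        # (constant factors cancel in the weight ratio).
        s = math.sqrt(sigma**2 / n + tau**2)
        return math.exp(-0.5 * ((ybar - mu) / s) ** 2) / s
    a = w * marg(mu0, tau0)
    b = (1 - w) * marg(0.0, tau_flat)
    w_post = a / (a + b)
    return (w_post,
            normal_posterior(ybar, n, sigma, mu0, tau0),
            normal_posterior(ybar, n, sigma, 0.0, tau_flat))
```

When the informative prior is badly misspecified, its posterior weight collapses toward zero, which is exactly the robustness behaviour the abstract describes.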
3. Tan, Rong Kun Jason. "Scalable Data-agnostic Processing Model with a Priori Scheduling for the Cloud". Thesis, Curtin University, 2019. http://hdl.handle.net/20.500.11937/75449.
Abstract:
Cloud computing is identified as a promising solution for performing big data analytics. However, maximizing cloud utilization while optimizing intra-node, inter-node, and memory management remains an open challenge. This thesis presents a novel resource allocation model for the cloud that load-balances data-agnostic tasks, minimizing intra-node and inter-node delays and decreasing memory consumption in big data analytics. In conclusion, the proposed model outperforms existing techniques.
4. Bussy, Victor. "Integration of a priori data to optimise industrial X-ray tomographic reconstruction". Electronic thesis or dissertation, INSA Lyon, 2024. http://www.theses.fr/2024ISAL0116.
Abstract:
This thesis explores research topics in the field of industrial non-destructive testing (NDT) using X-rays. The application of CT has significantly expanded, and its use has intensified across many industrial sectors. Due to increasing demands and constraints on inspection processes, CT must continually evolve and adapt. Whether in terms of reconstruction quality or inspection time, X-ray tomography is constantly progressing, particularly in the so-called sparse-view strategy. This strategy involves reconstructing an object using the minimum possible number of radiographic projections while maintaining satisfactory reconstruction quality, which reduces acquisition times and associated costs. Sparse-view reconstruction poses a significant challenge, as the tomographic problem is ill-conditioned or, as it is often described, ill-posed. Numerous techniques have been developed to overcome this obstacle, many of which rely on leveraging prior information during the reconstruction process. By exploiting data and knowledge available before the experiment, it is possible to improve reconstruction results despite the reduced number of projections. In our industrial context, for example, the computer-aided design (CAD) model of the object is often available, which provides valuable information about the geometry of the object under study. However, the CAD model only offers an approximate representation of the object: in NDT or metrology, it is precisely the differences between an object and its CAD model that are of interest. Integrating prior information is therefore complex, as this information is often "approximate" and cannot be used as is. Instead, we propose to judiciously use the geometric information available from the CAD model at each step of the process. We thus propose not a single method but a methodology for integrating prior geometric information during X-ray tomographic reconstruction.
5. Skrede, Ole-Johan. "Explicit, A Priori Constrained Model Parameterization for Inverse Problems, Applied on Geophysical CSEM Data". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-27343.
Abstract:
This thesis introduces a new parameterization of the model space in global inversion problems. The parameterization provides an explicit representation of the model space with a basis constrained by a priori information about the problem at hand. It can represent complex model structures with few parameters, thereby speeding up the inversion, since the number of iterations needed to converge scales strongly with the number of parameters in stochastic, global inversion methods. A standard Simulated Annealing optimization routine is implemented and further extended to optimize over a dynamically varying number of variables. The method is applied to the inversion of marine CSEM data; on both synthetic and real data sets it recovers resistivity profiles that agree well with available well-bore log data. The trans-dimensional, self-parameterizing Simulated Annealing algorithm introduced in this thesis proves superior to the regular algorithm with fixed parameter dimensions.
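A standard Simulated Annealing routine of the kind the abstract mentions reduces to a few lines. The sketch below is the fixed-dimension baseline only; the trans-dimensional extension is the thesis's contribution and is not reproduced here, and all parameter values are illustrative.

```python
import math
import random

def simulated_annealing(cost, init, neighbor, t0=1.0, cooling=0.995,
                        n_iters=20000, seed=0):
    """Minimize `cost` by simulated annealing: always accept downhill
    moves, accept uphill moves with probability exp(-delta / T) while
    the temperature T cools geometrically."""
    rng = random.Random(seed)
    x, cx = init, cost(init)
    best, cbest = x, cx
    t = t0
    for _ in range(n_iters):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= cx or rng.random() < math.exp(-(cy - cx) / t):
            x, cx = y, cy
            if cx < cbest:
                best, cbest = x, cx
        t *= cooling
    return best, cbest
```

In the thesis's setting, `x` would be the vector of basis coefficients describing a resistivity model and `cost` a data-misfit functional; here a scalar toy objective suffices to show the mechanics.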
6. Kindlund, Andrée. "Inversion of SkyTEM Data Based on Geophysical Logging Results for Groundwater Exploration in Örebro, Sweden". Thesis, Luleå tekniska universitet, Geovetenskap och miljöteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85315.
Abstract:
Declining groundwater tables threaten several Swedish municipalities that draw their drinking water from them. To ensure a sound drinking water supply, the Geological Survey of Sweden has initiated a groundwater exploration plan. Airborne electromagnetic measurements have seen a rise in hydrogeophysical projects and have great potential to yield high-quality models, especially when combined with drilling data. In 2018, the Geological Survey of Sweden conducted an airborne electromagnetic survey, using the SkyTEM system, on the outskirts of Örebro, Sweden. SkyTEM is a time-domain system developed specifically with hydrogeophysical applications in mind, and it is the system most favoured in hydrogeophysical investigations. It is unique in being able to measure interleaved low- and high-moment current pulses, which enables both high resolution close to the surface and increased depth of investigation. During 2019, further drilling was carried out in the area, including both lithological logging and geophysical logging with natural gamma and normal resistivity; high natural gamma radiation typically indicates clay content in the rocks. The geology of the area has been well explored since the 1940s, when oil was extracted from alum shale in Kvarntorp, located in the survey area. Rocks of sedimentary origin reach approximately 80 m down to the contact with the bedrock, and well-preserved layers of limestone, shale, alum shale, and sandstone are common throughout the area. Combining SkyTEM data with borehole data increases confidence and generates a model that better reflects the geology of the area. The modelling was performed with the AarhusInv inversion code, developed by the HydroGeophysical Group (HGG) at Aarhus University, Denmark. Four different models along a single line were generated by using 3, 4, 6, and 30 layers for the reference model in the inversion. Horizontal constraints were applied to all models; vertical constraints were applied only to the 30-layer model. The survey flight altitude is considered high, and in combination with the removal of data points affected by noise, the maximum number of layers in the final model is limited to three. This suggests that the 3-layer model is the most representative model for this survey. The conductive shale seen in the geophysical log is visible in all models at a depth of roughly 40-60 m, consistent with the log. No information is retrieved below the shale, so the contact between the sandstone and the crystalline rock is not resolved; the lack of information below a highly conductive structure is expected due to shielding effects. This study recommends carefully assessing flight altitude during quality-control analysis in survey design.
7. Beretta, Valentina. "Évaluation de la véracité des données : améliorer la découverte de la vérité en utilisant des connaissances a priori". Thesis, IMT Mines Alès, 2018. http://www.theses.fr/2018EMAL0002/document.
Abstract:
The notion of data veracity is increasingly getting attention due to the problem of misinformation and fake news. With more and more information published online, it is becoming essential to develop models that automatically evaluate information veracity. Indeed, evaluating data veracity is very difficult for humans: confirmation bias prevents them from objectively evaluating information reliability, and the amount of information available nowadays makes the task time-consuming, so computational power and automated methods are required. In this thesis we focus on Truth Discovery models. These approaches address the data veracity problem when conflicting values about the same properties of real-world entities are provided by multiple sources, and they aim to identify the true claims among the conflicting ones. More precisely, they are unsupervised models based on the rationale that true information is provided by reliable sources and reliable sources provide true information. The main contribution of this thesis consists in improving Truth Discovery models by considering a priori knowledge expressed in ontologies, which may facilitate the identification of true claims. Two particular aspects of ontologies are considered. First, we explore the semantic dependencies that may exist among different values, i.e. the ordering of values through certain conceptual relationships. Indeed, two different values are not necessarily conflicting: they may represent the same concept at different levels of detail. To integrate this kind of knowledge into existing approaches, we use the mathematical models of partial order. Second, we consider recurrent patterns that can be derived from ontologies, modelled using rules; this additional information reinforces the confidence in certain values when the patterns are observed. Experiments conducted on both synthetic and real-world datasets show that a priori knowledge enhances existing models and paves the way towards a more reliable information world. Source code as well as the synthetic and real-world datasets are freely available.
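The shared rationale of these models, reliable sources make true claims and true claims come from reliable sources, boils down to a fixed-point iteration. The sketch below shows that plain, ontology-free baseline (the data are hypothetical; this is not the thesis's extended model):

```python
def truth_discovery(claims, n_iters=20):
    """Iteratively estimate source reliability and claim confidence.

    `claims` maps entity -> {source: claimed value}. A value's
    confidence is the normalized total reliability of the sources
    asserting it; a source's reliability is the mean confidence of
    the values it claims.
    """
    sources = {s for votes in claims.values() for s in votes}
    reliability = {s: 0.5 for s in sources}
    for _ in range(n_iters):
        # Step 1: confidence of each candidate value, per entity.
        confidence = {}
        for entity, votes in claims.items():
            scores = {}
            for source, value in votes.items():
                scores[value] = scores.get(value, 0.0) + reliability[source]
            total = sum(scores.values())
            confidence[entity] = {v: c / total for v, c in scores.items()}
        # Step 2: reliability of each source from its claims.
        for s in sources:
            cs = [confidence[e][v] for e, votes in claims.items()
                  for src, v in votes.items() if src == s]
            reliability[s] = sum(cs) / len(cs)
    truths = {e: max(c, key=c.get) for e, c in confidence.items()}
    return truths, reliability
```

The thesis's contribution plugs ontological knowledge into this loop, e.g. treating "Paris" and "Île-de-France" as ordered rather than conflicting values.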
8. Denaxas, Spiridon Christoforos. "A novel framework for integrating a priori domain knowledge into traditional data analysis in the context of bioinformatics". Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492124.
Abstract:
Recent advances in experimental technology have given scientists the ability to perform large-scale multidimensional experiments involving large data sets. As a direct implication, the amount of data being generated is rising exponentially. However, it has been shown that in order to fully scrutinize and comprehend the results obtained from traditional data analysis approaches, a priori domain knowledge must be taken into consideration. Infusing existing knowledge into data analysis operations, however, is a non-trivial task that presents a number of challenges. This research is concerned with utilizing a structured ontology, representing the individual elements composing such large data sets, for assessing the results obtained. More specifically, statistical natural language processing and information retrieval methodologies are used to provide a seamless integration of existing domain knowledge in the context of cluster analysis experiments on gene product expression patterns. The aim of this research is to produce a framework for integrating a priori domain knowledge into traditional data analysis approaches, in the context of DNA microarrays and gene expression experiments. The value added by the framework to the existing body of research is twofold. First, the framework provides a figure-of-merit score for assessing and quantifying the biological relatedness between individual gene products. Second, it proposes a mechanism for evaluating the results of data clustering algorithms from a biological point of view.
9

Katragadda, Mohit. „Development of flame surface density closure for turbulent premixed flames based on a priori analysis of direct numerical simulation data“. Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2195.

Abstract:
In turbulent premixed flames, the modelling of the mean or filtered reaction rate translates to the closure of the flame surface to volume ratio, which is commonly referred to as the Flame Surface Density (FSD). The FSD-based reaction rate closure is well established in the context of Reynolds Averaged Navier-Stokes (RANS) simulations for unity Lewis numbers. However, models for FSD in the context of Large Eddy Simulations (LES) are relatively rare. In this study, three-dimensional Direct Numerical Simulations (DNS) of freely propagating statistically planar premixed flames encompassing a range of different turbulent Reynolds numbers and global Lewis numbers were used. The variation of turbulent Reynolds number has been brought about by modifying the Karlovitz and the Damköhler numbers independently of each other. The DNS data has been explicitly Reynolds averaged and LES filtered for a priori assessment of existing FSD models and for the purpose of proposing new models where necessary.
10

Rouault-Pic, Sandrine. „Reconstruction en tomographie locale : introduction d'information à priori basse résolution“. PhD thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00005016.

Abstract:
One current objective in tomography is to reduce the dose delivered to the patient. New imaging systems, integrating small high-resolution detectors or strongly collimated sources, make it possible to reduce the dose. These devices bring to the fore the problem of reconstructing an image from local information. One way to approach the local tomography problem is to introduce a priori information in order to resolve the non-uniqueness of the solution. We therefore propose to complement the local high-resolution projections (coming from the systems described above) with complete low-resolution projections, obtained for example from a standard CT examination. We assume that the registration of the data has already been performed; this part is not the subject of our work. We first adapted classical reconstruction methods (ART, regularized conjugate gradient, and filtered backprojection) to the local problem by introducing the a priori information into the reconstruction process. We then address wavelet-based reconstruction methods and likewise propose an adaptation to our problem. In all cases, the dual resolution also appears in the reconstructed image, with a finer resolution in the region of interest. Finally, given the high computational cost of the methods employed, we propose a parallelization of the implemented algorithms.

Books on the topic "A priori data"

1

Stroobandt, Dirk. A priori wire length estimates for digital design. Boston: Kluwer Academic Publishers, 2001.

2

Kovtoni͡uk, N. F. Fotochuvstvitelʹnye MDP-pribory dli͡a preobrazovanii͡a izobrazheniĭ. Moskva: "Radio i svi͡azʹ", 1990.

3

Maciejewski, Henryk. Predictive modelling in high-dimensional data: Prior domain knowledge-based approaches. Wrocław: Oficyna Wydawnicza Politechniki Wrocławskiej, 2013.

4

Orli͡a︡nskiĭ, A. D., and Gosudarstvennyĭ komitet SSSR po gidrometeorologii i kontroli͡u︡ prirodnoĭ sredy, eds. Pribory, ustanovki, avtomatizat͡s︡ii͡a︡ v ėksperimentalʹnoĭ meteorologii. Moskva: Moskovskoe otd-nie Gidrometeoizdata, 1985.

5

Kaniovskiĭ, S. S., Vasiliĭ Afanasʹevich Bodner, A. V. Seleznev, and Moskovskiĭ institut priborostroenii͡a︡, eds. Tochnye pribory i izmeritelʹnye sistemy. Moskva: MIP, 1992.

6

Kaniovskiĭ, S. S., and Moskovskiĭ institut priborostroenii͡a︡, eds. Tochnye pribory i izmeritelʹnye sistemy. Moskva: MIP, 1990.

7

Gorn, L. S. Programmno-upravli͡a︡emye pribory i kompleksy dli͡a︡ izmerenii͡a︡ ionizirui͡u︡shchego izluchenii͡a︡. Moskva: Ėnergoatomizdat, 1985.

8

Gorn, L. S. Programmno-upravli︠a︡emye pribory i kompleksy dli︠a︡ izmerenii︠a︡ ionizirui︠u︡shchego izluchenii︠a︡. Moskva: Ėnergoatomizdat, 1985.

9

Scott Jones, Julie. Learn to Clean and Prepare Scale Data Prior to Descriptive Analysis Using Data From the General Social Survey (2018). London: SAGE Publications, Ltd., 2022. http://dx.doi.org/10.4135/9781529605105.

10

Scott Jones, Julie. Learn to Clean and Prepare Categorical Data Prior to Descriptive Analysis Using Data From the General Social Survey (2018). London: SAGE Publications, Ltd., 2022. http://dx.doi.org/10.4135/9781529605037.


Book chapters on the topic "A priori data"

1

Racke, Reinhard. „Weighted a priori estimates for small data“. In Lectures on Nonlinear Evolution Equations, 84–90. Wiesbaden: Vieweg+Teubner Verlag, 1992. http://dx.doi.org/10.1007/978-3-663-10629-6_8.

2

Racke, Reinhard. „Weighted a priori estimates for small data“. In Lectures on Nonlinear Evolution Equations, 82–88. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21873-1_8.

3

Roberts, R. A. „Limited Data Tomography Using Support Minimization with a Priori Data“. In Review of Progress in Quantitative Nondestructive Evaluation, 749–56. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3344-3_96.

4

Boari, Giuseppe, Gabriele Cantaluppi, and Marta Nai Ruscone. „Scale Reliability Evaluation for A-Priori Clustered Data“. In Studies in Classification, Data Analysis, and Knowledge Organization, 37–45. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-06692-9_5.

5

Wang, Cong, Tonghui Wang, David Trafimow, Hui Li, Liqun Hu, and Abigail Rodriguez. „Extending the A Priori Procedure (APP) to Address Correlation Coefficients“. In Data Science for Financial Econometrics, 141–49. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-48853-6_10.

6

Roberts, R. A., and O. Ertekin. „Support Minimized Limited View CT Using a Priori Data“. In Review of Progress in Quantitative Nondestructive Evaluation, 373–80. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2848-7_48.

7

Temme, T., and R. Decker. „Analysis of A Priori Defined Groups in Applied Market Research“. In Studies in Classification, Data Analysis, and Knowledge Organization, 529–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-60187-3_57.

8

Fujita, Hamido, and Yu-Chien Ko. „A Priori Membership for Data Representation: Case Study of SPECT Heart Data Set“. In Recent Advances in Intelligent Engineering, 65–80. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14350-3_4.

9

Ásmundsdóttir, Rúna, Yusen Chen, and Henk J. van Zuylen. „Dynamic Origin–Destination Matrix Estimation Using Probe Vehicle Data as A Priori Information“. In Traffic Data Collection and its Standardization, 89–108. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-6070-2_7.

10

Orchel, Marcin. „Support Vector Regression with A Priori Knowledge Used in Order Execution Strategies Based on VWAP“. In Advanced Data Mining and Applications, 318–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25856-5_24.


Conference papers on the topic "A priori data"

1

Tsitsipas, Athanasios, Pascal Schiessle und Lutz Schubert. „Scotty: Fast a priori Structure-based Extraction from Time Series“. In 2021 IEEE International Conference on Big Data (Big Data). IEEE, 2021. http://dx.doi.org/10.1109/bigdata52589.2021.9671513.

2

Challal, Zakia, and Thouraya Bouabana-Tebibel. „A priori replica placement strategy in data grid“. In 2010 International Conference on Machine and Web Intelligence (ICMWI). IEEE, 2010. http://dx.doi.org/10.1109/icmwi.2010.5647925.

3

Conrad, Kevin L., John R. Galloway, William P. Irwin, Walter H. Delashmit, James T. Jack, Govindaraj Kuntimad, Maritza R. Muguira, Charles Q. Little, and Ralph R. Peters. „Perception and Autonomous Navigation Using a Priori Data“. In SAE 2006 World Congress & Exhibition. Warrendale, PA: SAE International, 2006. http://dx.doi.org/10.4271/2006-01-1160.

4

Razansky, Daniel, and Vasilis Ntziachristos. „Fluorescence molecular tomography using a priori photoacoustic data“. In Biomedical Optics (BiOS) 2008, edited by Alexander A. Oraevsky and Lihong V. Wang. SPIE, 2008. http://dx.doi.org/10.1117/12.764272.

5

Itert, Lukasz, Włodzisław Duch, and John Pestian. „Influence of a priori Knowledge on Medical Document Categorization“. In 2007 IEEE Symposium on Computational Intelligence and Data Mining. IEEE, 2007. http://dx.doi.org/10.1109/cidm.2007.368868.

6

Chen, Jiangping, Rong Wang, and Xuehua Tang. „An improved algorithm of a priori based on geostatistics“. In International Conference on Earth Observation Data Processing and Analysis, edited by Deren Li, Jianya Gong, and Huayi Wu. SPIE, 2008. http://dx.doi.org/10.1117/12.815695.

7

Kosaka, M., K. Koizumi und H. Shibata. „Automatic initialization of adaptive control using a priori data“. In 1999 European Control Conference (ECC). IEEE, 1999. http://dx.doi.org/10.23919/ecc.1999.7099316.

8

Samoylov, Alexey, Nikolay Sergeev, Margarita Kucherova und Boris Denisov. „Methodology of Big Data Integration from A Priori Unknown Heterogeneous Data Sources“. In the 2018 2nd International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3297156.3297249.

9

Ruud, B. O., und T. A. Johansen. „Seismic Inversion Using Well-Log Data as a Priori Information“. In 64th EAGE Conference & Exhibition. European Association of Geoscientists & Engineers, 2002. http://dx.doi.org/10.3997/2214-4609-pdb.5.p292.

10

„Reliability Evaluating Theory for Data Sample with Unknown Priori Information“. In 2018 3rd International Conference on Computer Science and Information Engineering. Clausius Scientific Press, 2018. http://dx.doi.org/10.23977/iccsie.2018.1054.


Organization reports on the topic "A priori data"

1

Xu, Ling, and Anthony Stentz. Cost-based Registration using A Priori Data for Mobile Robot Localization. Fort Belvoir, VA: Defense Technical Information Center, January 2008. http://dx.doi.org/10.21236/ada525649.

2

Mock, Clara, Christopher Rinderspacher, and Brandon McWilliams. Physics-Guided Neural Network for Regularization and Learning Unbalanced Data Sets: A Priori Prediction of Melt Pool Width Variation in Directed Energy Deposition. Aberdeen Proving Ground, MD: DEVCOM Army Research Laboratory, March 2023. http://dx.doi.org/10.21236/ad1196030.

3

McCall, Jamie, Natalie Prochaska, and James Onorevole. Identifying Reasons for Small and Medium-Sized Firm Closures in North Carolina: An Exploratory Framework Leveraging Administrative Data. Carolina Small Business Development Fund, December 2022. http://dx.doi.org/10.46712/firm.closure.reasons.

Abstract:
Business failure is a natural part of the development lifecycle. In a healthy economy, the formations and dissolutions of small firms drive innovation through the process of creative destruction. However, an excessive level of involuntary closures lowers both economic mobility and community social capital. We partnered with the North Carolina Secretary of State’s Office (NCSOS) to identify factors that might be driving involuntary firm closures using administrative data. This analysis outlines our recommendation to use an exploratory open-ended survey instrument which targets dissolved firm owners. We believe the methodology is indicated due to the inherent challenges of getting survey data from this population. With a relatively small number of responses, an open-ended survey would allow for a hybrid-thematic analysis framework which combines a data-driven inductive approach with a deductive theoretical (a priori) template of codes. Our recommended analysis lens complements phenomenological qualitative inquiry by connecting the respondent’s open-ended answers to theories in the business failure literature.
4

McDonagh, Marian S., Roger Chou, Jesse Wagner, Azrah Y. Ahmed, Benjamin J. Morasco, Suchitra Iyer, and Devan Kansagara. Living Systematic Reviews: Practical Considerations for the Agency for Healthcare Research and Quality Evidence-based Practice Center Program. Agency for Healthcare Research and Quality (AHRQ), March 2022. http://dx.doi.org/10.23970/ahrqepcwhitepaperlsr.

Abstract:
Living systematic reviews are a relatively new approach to keeping the evidence in systematic reviews current by frequent surveillance and updating. The Agency for Healthcare Research and Quality’s Evidence-based Practice Center Program recently commissioned a systematic review of plant-based treatments for chronic pain management. This white paper describes the team’s experience in implementing the protocol that was developed a priori, and reflects on the challenges faced and lessons learned in the process of developing and maintaining a living systematic review. Challenges related to scoping, conducting searches, selecting studies, abstracting data, assessing risk of bias, conducting meta-analysis, performing narrative synthesis, assessing strength of evidence, and generating conclusions are described, as well as potential approaches to addressing these challenges.
5

Filipiak, Katarzyna, Dietrich von Rosen, Martin Singull, and Wojciech Rejchel. Estimation under inequality constraints in univariate and multivariate linear models. Linköping University Electronic Press, March 2024. http://dx.doi.org/10.3384/lith-mat-r-2024-01.

Abstract:
In this paper, least squares and maximum likelihood estimates under univariate and multivariate linear models with a priori information related to maximum effects in the models are determined. Both the loss functions (least squares and negative log-likelihood) and the constraints are convex, so convex optimization theory can be utilized to obtain the estimates, which in this paper are called Safety belt estimates. In particular, the complementary slackness condition, common in convex optimization, implies two alternative types of solutions, strongly dependent on the data and the restriction. It is experimentally shown that, despite the similarity to ridge regression estimation under the univariate linear model, the Safety belt estimates usually behave better than estimates obtained via ridge regression. Moreover, concerning the multivariate model, the proposed technique represents a completely novel approach.
6

Volpe Martincus, Christian, und Jerónimo Carballo. Beyond The Average Effects: The Distributional Impacts of Export Promotion Programs in Developing Countries. Inter-American Development Bank, August 2010. http://dx.doi.org/10.18235/0011214.

Abstract:
Do all exporters benefit the same from export promotion programs? Surprisingly, no matter how obvious this question may a priori seem when thinking of the effectiveness of these programs, there is virtually no empirical evidence on how they affect export performance in different parts of the distribution of export outcomes. This paper aims at filling this gap in the literature. We assess the distributional impacts of trade promotion activities by performing efficient semiparametric quantile treatment effect estimation on assistance, total sales, and highly disaggregated export data for the whole population of Chilean exporters over the period 2002-2006. We find that these activities have indeed heterogeneous effects over the distribution of export performance, along both the extensive and intensive margins. In particular, smaller firms as measured by their total exports seem to benefit more from export promotion actions.
7

Volpe Martincus, Christian, and Jerónimo Carballo. Export Promotion Activities in Developing Countries: What kind of Trade Do They Promote? Inter-American Development Bank, August 2010. http://dx.doi.org/10.18235/0011217.

Abstract:
Information problems involved in trading differentiated goods are a priori more acute than those associated with trading more homogeneous products. The impact of export promotion activities intended to address these problems can therefore be expected to differ across goods with different degrees of differentiation. Empirical evidence in this respect is virtually nonexistent. This paper aims at filling this gap in the literature by providing estimates of the effect of these activities on firms trading different goods, using highly disaggregated export data for the whole population of Costa Rican exporters over the period 2001-2006. We find that trade promotion actions favor an increase of exports along the extensive margin, in particular in terms of destination countries, in the case of firms that are already selling differentiated goods. However, these actions do not seem to encourage exporters to start exporting these goods. Further, no significant impacts are observed for firms exporting reference-priced and homogeneous goods.
8

Juden, Matthew, Tichaona Mapuwei, Till Tietz, Rachel Sarguta, Lily Medina, Audrey Prost, Macartan Humphreys et al. Process Outcome Integration with Theory (POInT): academic report. Centre for Excellence and Development Impact and Learning (CEDIL), March 2023. http://dx.doi.org/10.51744/crpp5.

Abstract:
This paper describes the development and testing of a novel approach to evaluating development interventions – the POInT approach. The authors used Bayesian causal modelling to integrate process and outcome data to generate insights about all aspects of the theory of change, including outcomes, mechanisms, mediators and moderators. They partnered with two teams who had evaluated or were evaluating complex development interventions: the UPAVAN team had evaluated a nutrition-sensitive agriculture intervention in Odisha, India, and the DIG team was in the process of evaluating a disability-inclusive poverty graduation intervention in Uganda. Each partner team's theory of change was adapted into a formal causal model, depicted as a directed acyclic graph (DAG). The DAG was specified in the statistical software R, using the CausalQueries package, having extended the package to handle large models. Using a novel prior elicitation strategy to elicit beliefs over many more parameters than has previously been possible, the partner teams' beliefs about the nature and strength of causal links in the causal model (priors) were elicited and combined into a single set of shared prior beliefs. The model was updated on the data alone as well as on the data plus priors to generate posterior models under different assumptions. Finally, the prior and posterior models were queried to learn about estimates of interest, and the relative role of prior beliefs and data in the combined analysis.
9

Baltagi, Badi H., Georges Bresson, Anoop Chaturvedi, and Guy Lacroix. Robust dynamic space-time panel data models using ε-contamination: An application to crop yields and climate change. CIRANO, January 2023. http://dx.doi.org/10.54932/ufyn4045.

Abstract:
This paper extends the Baltagi et al. (2018, 2021) static and dynamic ε-contamination papers to dynamic space-time models. We investigate the robustness of Bayesian panel data models to possible misspecification of the prior distribution. The proposed robust Bayesian approach departs from the standard Bayesian framework in two ways. First, we consider the ε-contamination class of prior distributions for the model parameters as well as for the individual effects. Second, both the base elicited priors and the ε-contamination priors use Zellner (1986)'s g-priors for the variance-covariance matrices. We propose a general "toolbox" for a wide range of specifications which includes the dynamic space-time panel model with random effects, with cross-correlated effects à la Chamberlain, for the Hausman-Taylor world, and for dynamic panel data models with homogeneous/heterogeneous slopes and cross-sectional dependence. Using an extensive Monte Carlo simulation study, we compare the finite sample properties of our proposed estimator to those of standard classical estimators. We illustrate our robust Bayesian estimator using the same data as in Keane and Neal (2020). We obtain short-run as well as long-run effects of climate change on corn producers in the United States.
10

Gentillon, Cynthia, Cory Atwood, Andrea Mack, and Zhegang Ma. EVALUATION OF WEAKLY INFORMED PRIORS FOR FLEX DATA. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1693425.
