Doctoral dissertations on the topic "Understanding of data models"

Follow this link to see other types of publications on this topic: Understanding of data models.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Explore the 50 best doctoral dissertations on the topic "Understanding of data models".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in ".pdf" format and read the work's abstract online, provided the relevant parameters are available in its metadata.

Browse doctoral dissertations from many different disciplines and compile appropriate bibliographies.

1

Sommeria-Klein, Guilhem. "From models to data : understanding biodiversity patterns from environmental DNA data". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30390/document.

Abstract:
Integrative patterns of biodiversity, such as the distribution of taxa abundances and the spatial turnover of taxonomic composition, have been under scrutiny from ecologists for a long time, as they offer insight into the general rules governing the assembly of organisms into ecological communities. Thanks to recent progress in high-throughput DNA sequencing, these patterns can now be measured in a fast and standardized fashion through the sequencing of DNA sampled from the environment (e.g. soil or water), instead of relying on tedious fieldwork and rare naturalist expertise. They can also be measured for the whole tree of life, including the vast and previously unexplored diversity of microorganisms. Taking full advantage of this new type of data is challenging, however: DNA-based surveys are indirect, and as such suffer from many potential biases; they also produce large and complex datasets compared to classical censuses. The first goal of this thesis is to investigate how statistical tools and models classically used in ecology or coming from other fields can be adapted to DNA-based data so as to better understand the assembly of ecological communities. The second goal is to apply these approaches to soil DNA data from the Amazonian forest, the Earth's most diverse land ecosystem. Two broad types of mechanisms are classically invoked to explain the assembly of ecological communities: 'neutral' processes, i.e. the random birth, death and dispersal of organisms, and 'niche' processes, i.e. the interaction of the organisms with their environment and with each other according to their phenotype. Disentangling the relative importance of these two types of mechanisms in shaping taxonomic composition is a key ecological question, with many implications ranging from the estimation of global diversity to conservation issues. In the first chapter, this question is addressed across the tree of life by applying the classical analytic tools of community ecology to soil DNA samples collected from various forest plots in French Guiana. The second chapter focuses on the neutral aspect of community assembly.[...]
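The 'neutral' processes mentioned in this abstract are easy to make concrete. Below is a minimal sketch, not drawn from the thesis, of a Hubbell-style neutral community: individuals die at random and are replaced either by local offspring or by immigrants from a regional pool; all parameter values are invented for illustration.

```python
# A minimal sketch of a neutral community model (Hubbell-style drift with
# immigration). Species identity plays no role in the dynamics; the species
# abundance distribution emerges from birth, death, and dispersal alone.
import numpy as np

rng = np.random.default_rng(0)

J = 500          # local community size (number of individuals)
m = 0.05         # immigration probability from the regional pool
pool = rng.dirichlet(np.ones(50) * 0.3)   # regional relative abundances
community = rng.choice(len(pool), size=J, p=pool)

for step in range(200_000):
    dead = rng.integers(J)                 # one individual dies at random
    if rng.random() < m:                   # replaced by an immigrant...
        community[dead] = rng.choice(len(pool), p=pool)
    else:                                  # ...or by a local offspring
        community[dead] = community[rng.integers(J)]

# Species abundance distribution after drift
species, counts = np.unique(community, return_counts=True)
print(sorted(counts, reverse=True))
```

Lowering the immigration rate m or running the drift longer reshapes the abundance distribution, which is exactly the kind of integrative pattern such models are compared against.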
2

Kivinen, Jyri Juhani. "Statistical models for natural scene data". Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/8879.

Abstract:
This thesis considers statistical modelling of natural image data. Obtaining advances in this field can have significant impact both for engineering applications and for the understanding of the human visual system. Several recent advances in natural image modelling have been obtained with the use of unsupervised feature learning. We consider a class of such models, restricted Boltzmann machines (RBMs), used in many recent state-of-the-art image models. We develop extensions of these stochastic artificial neural networks, and use them as a basis for building more effective image models and tools for computational vision. We first develop a novel framework for obtaining Boltzmann machines in which the hidden unit activations co-transform with transformed input stimuli in a stable and predictable way throughout the network. We define such models to be transformation equivariant. Such properties have been shown useful for computer vision systems, and have been motivational, for example, in the development of steerable filters, a widely used classical feature extraction technique. Translation-equivariant feature sharing has been the standard method for scaling image models beyond patch-sized data to large images. In our framework we extend shallow and deep models to account for other kinds of transformations as well, focusing on in-plane rotations. Motivated by the unsatisfactory results of current generative natural image models, we take a step back and evaluate whether they are able to model a subclass of the data, natural image textures. This is a necessary subcomponent of any credible model for visual scenes. We assess the performance of a state-of-the-art model of natural images for texture generation, using a dataset and evaluation techniques from prior work. We also perform a dissection of the model architecture, uncovering the properties important for good performance. Building on this, we develop structured extensions for more complicated data comprised of textures from multiple classes, using the single-texture model architecture as a basis. These models are shown to produce state-of-the-art texture synthesis results quantitatively, and are also effective qualitatively. It is demonstrated empirically that the developed multiple-texture framework provides a means to generate images of differently textured regions and more generic globally varying textures, and can also be used for texture interpolation, where the approach is radically different from the others in the area. Finally, we consider visual boundary prediction from natural images. The work aims to improve understanding of Boltzmann machines in the generation of image segment boundaries, and to investigate deep neural network architectures for learning the boundary detection problem. The developed networks (which avoid several hand-crafted model and feature designs commonly used for the problem) produce the fastest reported inference times in the literature, combined with state-of-the-art performance.
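For readers unfamiliar with the building block, the following is a minimal sketch of a binary RBM trained with one-step contrastive divergence (CD-1) on toy data; the dimensions, learning rate, and data are illustrative and unrelated to the image models developed in the thesis.

```python
# A minimal sketch of a binary restricted Boltzmann machine with CD-1.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid, lr = 16, 8, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

data = (rng.random((200, n_vis)) < 0.3).astype(float)  # toy binary data

for epoch in range(50):
    for v0 in data:
        # positive phase: sample hidden units given the data
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hid) < p_h0).astype(float)
        # negative phase: one Gibbs step back to a reconstruction
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_vis) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # CD-1 gradient step on weights and biases
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)
```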
3

Steinberg, Daniel. "An Unsupervised Approach to Modelling Visual Data". Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9415.

Abstract:
For very large visual datasets, producing expert ground-truth data for training supervised algorithms can represent a substantial human effort. In these situations there is scope for the use of unsupervised approaches that can model collections of images and automatically summarise their content. The primary motivation for this thesis comes from the problem of labelling large visual datasets of the seafloor obtained by an Autonomous Underwater Vehicle (AUV) for ecological analysis. It is expensive to label this data, as taxonomical experts for the specific region are required, whereas automatically generated summaries can be used to focus the efforts of experts, and inform decisions on additional sampling. The contributions in this thesis arise from modelling this visual data in entirely unsupervised ways to obtain comprehensive visual summaries. Firstly, popular unsupervised image feature learning approaches are adapted to work with large datasets and unsupervised clustering algorithms. Next, using Bayesian models the performance of rudimentary scene clustering is boosted by sharing clusters between multiple related datasets, such as regular photo albums or AUV surveys. These Bayesian scene clustering models are extended to simultaneously cluster sub-image segments to form unsupervised notions of “objects” within scenes. The frequency distribution of these objects within scenes is used as the scene descriptor for simultaneous scene clustering. Finally, this simultaneous clustering model is extended to make use of whole image descriptors, which encode rudimentary spatial information, as well as object frequency distributions to describe scenes. This is achieved by unifying the previously presented Bayesian clustering models, and in so doing rectifies some of their weaknesses and limitations. Hence, the final contribution of this thesis is a practical unsupervised algorithm for modelling images from the super-pixel to album levels, and is applicable to large datasets.
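As a rough illustration of Bayesian clustering of image descriptors (a generic stand-in, not the thesis's own models), scikit-learn's BayesianGaussianMixture approximates a Dirichlet-process mixture and infers how many clusters are actually occupied; the feature vectors below are random placeholders for real image descriptors.

```python
# A minimal sketch of Bayesian scene clustering on image feature vectors
# using a truncated Dirichlet-process approximation.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
features = rng.standard_normal((300, 32))   # e.g. one descriptor per image

dpgmm = BayesianGaussianMixture(
    n_components=20,                        # truncation level, not final K
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    random_state=0,
)
labels = dpgmm.fit_predict(features)
print(np.bincount(labels))                  # occupied clusters <= 20
```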
4

Das, Debasish. "Bayesian Sparse Regression with Application to Data-driven Understanding of Climate". Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/313587.

Abstract:
Sparse regressions based on constraining the L1-norm of the coefficients became popular due to their ability to handle high-dimensional data, unlike regular regressions, which suffer from overfitting and model-identifiability issues, especially when the sample size is small. They are often the method of choice in many fields of science and engineering for simultaneously selecting covariates and fitting parsimonious linear models that are better generalizable and easily interpretable. However, significant challenges may be posed by the need to accommodate extremes and other domain constraints such as dynamical relations among variables, spatial and temporal constraints, the need to provide uncertainty estimates, and feature correlations, among others. We adopted a hierarchical Bayesian version of the sparse regression framework and exploited its inherent flexibility to accommodate the constraints. We applied sparse regression to the feature selection problem of statistical downscaling of climate variables, with particular focus on their extremes. This is important for many impact studies where climate change information is required at a spatial scale much finer than that provided by global or regional climate models. Characterizing the dependence of extremes on covariates can help in the identification of plausible causal drivers and inform extremes downscaling. We propose a general-purpose sparse Bayesian framework for covariate discovery that accommodates the non-Gaussian distribution of extremes within a hierarchical Bayesian sparse regression model. We obtain posteriors over regression coefficients, which indicate dependence of extremes on the corresponding covariates and provide uncertainty estimates, using a variational Bayes approximation. The method is applied to selecting informative atmospheric covariates at multiple spatial scales, as well as indices of large-scale circulation and global warming, related to the frequency of precipitation extremes over the continental United States. Our results confirm the dependence relations that may be expected from known precipitation physics and generate novel insights which can inform physical understanding. We plan to extend our model to discover covariates for extreme intensity in the future. We further extend our framework to handle the dynamic relationship among climate variables using a nonparametric Bayesian mixture of sparse regression models based on the Dirichlet Process (DP). The extended model can achieve simultaneous clustering and discovery of covariates within each cluster. Moreover, a priori knowledge about the association between pairs of data points is incorporated in the model through must-link constraints on a Markov Random Field (MRF) prior. A scalable and efficient variational Bayes approach is developed to infer posteriors on regression coefficients and cluster variables.
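A quick way to see the appeal of sparse Bayesian regression is scikit-learn's ARDRegression (automatic relevance determination), which, like the hierarchical models described above, shrinks irrelevant coefficients toward zero and returns posterior uncertainty; the data below are synthetic and the model is a simple stand-in, not the thesis's framework.

```python
# A minimal sketch of sparse Bayesian regression in the small-n, large-p
# regime: only 3 of 50 covariates carry signal.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n, p = 100, 50                      # small sample, many covariates
X = rng.standard_normal((n, p))
true_coef = np.zeros(p)
true_coef[:3] = [2.0, -1.5, 1.0]    # only 3 covariates matter
y = X @ true_coef + 0.1 * rng.standard_normal(n)

model = ARDRegression().fit(X, y)
print(np.round(model.coef_[:6], 2))  # first three large, rest near zero
print(model.sigma_.shape)            # posterior covariance of retained weights
```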
5

LaMar, Michelle Marie. "Models for understanding student thinking using data from complex computerized science tasks". Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3686374.

Abstract:

The Next Generation Science Standards (NGSS Lead States, 2013) define performance targets which will require assessment tasks that can integrate discipline knowledge and cross-cutting ideas with the practices of science. Complex computerized tasks will likely play a large role in assessing these standards, but many questions remain about how best to make use of such tasks within a psychometric framework (National Research Council, 2014). This dissertation explores the use of a more extensive cognitive modeling approach, driven by the extra information contained in action data collected while students interact with complex computerized tasks. Three separate papers are included. In Chapter 2, a mixture IRT model is presented that simultaneously classifies student understanding of a task while measuring student ability within their class. The model is based on differentially scoring the subtask action data from a complex performance. Simulation studies show that both class membership and class-specific ability can be reasonably estimated given sufficient numbers of items and response alternatives. The model is then applied to empirical data from a food-web task, providing some evidence of feasibility and validity. Chapter 3 explores the potential of using a more complex cognitive model for assessment purposes. Borrowing from the cognitive science domain, student decisions within a strategic task are modeled with a Markov decision process. Psychometric properties of the model are explored and simulation studies report on parameter recovery within the context of a simple strategy game. In Chapter 4 the Markov decision process (MDP) measurement model is then applied to an educational game to explore the practical benefits and difficulties of using such a model with real world data. Estimates from the MDP model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
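The MDP machinery behind the measurement model can be illustrated with standard value iteration on a toy problem; the 3-state, 2-action setup below is invented, and a psychometric application would additionally infer parameters from students' observed action sequences rather than just compute the optimal policy.

```python
# A minimal sketch of value iteration for a tiny Markov decision process.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.95
# P[a, s, t] = transition probabilities; R[s, a] = expected rewards
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]])
R = np.array([[0.0, 0.1], [0.0, 0.2], [1.0, 0.0]])

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * np.einsum("ast,t->sa", P, V)  # Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)
print(V, policy)   # state values and the greedy (optimal) action per state
```

In a measurement model of this kind, a student's ability can be inferred from how close their observed choices are to the optimal policy.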

6

Maloo, Akshay. "Dynamic Behavior Visualizer: A Dynamic Visual Analytics Framework for Understanding Complex Networked Models". Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/25296.

Abstract:
Dynamic Behavior Visualizer (DBV) is a visual analytics environment for visualizing the spatial and temporal movements and behavioral changes of an individual or a group, e.g. a family, within a realistic urban environment. DBV is specifically designed to visualize adaptive behavioral changes, as they pertain to the interactions with multiple inter-dependent infrastructures, in the aftermath of a large crisis, e.g. a hurricane or the detonation of an improvised nuclear device. DBV is web-enabled and thus easily accessible to any user with access to a web browser. A novel aspect of the system is its scale and fidelity. The goal of DBV is to synthesize information and derive insight from it; to detect the expected and discover the unexpected; and to provide timely, easily understandable assessments and the ability to piece together all this information.
7

Izumi, Kenji. "Application of Paleoenvironmental Data for Testing Climate Models and Understanding Past and Future Climate Variations". Thesis, University of Oregon, 2014. http://hdl.handle.net/1794/18510.

Abstract:
Paleo data-model comparison is the process of comparing output from model simulations of past periods with paleoenvironmental data. It enables us to understand both the paleoclimate mechanism and responses of the earth environment to the climate and to evaluate how models work. This dissertation has two parts that each involve the development and application of approaches for data-model comparisons. In part 1, which is focused on the understanding of both past and future climatic changes/variations, I compare paleoclimate and historical simulations with future climate projections exploiting the fact that climate-model configurations are exactly the same in the paleo and future simulations in the Coupled Model Intercomparison Project Phase 5. In practice, I investigated large-scale temperature responses (land-ocean contrast, high-latitude amplification, and change in temperature seasonality) in paleo and future simulations, found broadly consistent relationships across the climate states, and validated the responses using modern observations and paleoclimate reconstructions. Furthermore, I examined the possibility that a small set of common mechanisms controls the large-scale temperature responses using a simple energy-balance model to decompose the temperature changes shown in warm and cold climate simulations and found that the clear-sky longwave downward radiation is a key control of the robust responses. In part 2, I applied the equilibrium terrestrial biosphere models, BIOME4 and BIOME5 (developed from BIOME4 herein), for reconstructing paleoclimate. I applied inverse modeling through the iterative forward-modeling (IMIFM) approach that uses the North American vegetation data to infer the mid-Holocene (MH, 6000 years ago) and the Last Glacial Maximum (LGM, 21,000 years ago) climates that control vegetation distributions. The IMIFM approach has the potential to provide more accurate quantitative climate estimates from pollen records than statistical approaches. Reconstructed North American MH and LGM climate anomaly patterns are coherent and consistent between variables and between BIOME4 and BIOME5, and these patterns are also consistent with previous data synthesis. This dissertation includes previously published and unpublished coauthored material.
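The simple energy-balance reasoning mentioned above can be illustrated with the textbook zero-dimensional model (a generic sketch, not the decomposition actually used in the dissertation): equilibrium temperature follows from balancing absorbed solar radiation against outgoing longwave radiation, S(1 - alpha)/4 = epsilon * sigma * T^4.

```python
# A minimal sketch of a zero-dimensional energy-balance model; all values
# are textbook illustrations.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0            # solar constant, W m^-2
alpha = 0.30          # planetary albedo

def equilibrium_T(epsilon: float) -> float:
    """Equilibrium surface temperature for effective emissivity epsilon."""
    return (S * (1.0 - alpha) / (4.0 * epsilon * SIGMA)) ** 0.25

T0 = equilibrium_T(0.612)   # roughly present-day conditions: ~288 K
T1 = equilibrium_T(0.600)   # slightly stronger greenhouse effect
print(f"{T0:.1f} K -> {T1:.1f} K (warming of {T1 - T0:.2f} K)")
```

Changing the effective emissivity stands in for the clear-sky longwave downward radiation that the abstract identifies as a key control of the temperature responses.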
8

Lipecki, Johan, and Viggo Lundén. "The Effect of Data Quantity on Dialog System Input Classification Models". Thesis, KTH, Hälsoinformatik och logistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237282.

Abstract:
This paper researches how different amounts of data affect different word vector models for classification of dialog system user input. A hypothesis is tested that there is a data threshold for dense vector models to reach the state-of-the-art performance that has been shown in recent research, and that character-level n-gram word-vector classifiers are especially suited for Swedish classifiers, because of compounding and the ability of character-level n-gram models to vectorize out-of-vocabulary words. Also, a second hypothesis is put forward that models trained with single statements are more suitable for chat user input classification than models trained with full conversations. The results are not able to support either of our hypotheses but show that sparse vector models perform very well on the binary classification tasks used. Further, the results show that 799,544 words of data is insufficient for training dense vector models but that training the models with full conversations is sufficient for single statement classification, as the single-statement-trained models do not show any improvement in classifying single statements.
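The out-of-vocabulary argument for character-level n-grams can be sketched with a standard TF-IDF pipeline; the Swedish training sentences and labels below are invented placeholders, not the thesis's data.

```python
# A minimal sketch of a character-level n-gram classifier: sub-word
# features let unseen (e.g. compound) words share n-grams with known ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["var ligger närmaste bankkontor",
         "jag vill spärra mitt kreditkort",
         "vad kostar en bolåneränta idag",
         "hur öppnar jag ett sparkonto"]
labels = ["branch", "card", "loan", "account"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),  # char n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
# A compound never seen in training still shares character n-grams
# with words that were seen, so it gets a non-trivial prediction:
print(clf.predict(["spärra bankkortet"]))
```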
9

Abufouda, Mohammed [author], and Katharina Zweig [academic supervisor]. "Learning From Networked-data: Methods and Models for Understanding Online Social Networks Dynamics / Mohammed Abufouda ; Betreuer: Katharina Zweig". Kaiserslautern: Technische Universität Kaiserslautern, 2020. http://d-nb.info/1221599747/34.

10

Wojatzki, Michael Maximilian [author], and Torsten Zesch [academic supervisor]. "Computer-assisted understanding of stance in social media : formalizations, data creation, and prediction models / Michael Maximilian Wojatzki ; Betreuer: Torsten Zesch". Duisburg, 2019. http://d-nb.info/1177681471/34.

11

Lee, Xing Ju. "Statistical and simulation modelling for enhanced understanding of hospital pathogen and related health issues". Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103762/1/Xing%20Ju_Lee_Thesis.pdf.

Abstract:
This thesis investigated the temporal occurrence and transmission of within-hospital pathogens using appropriate statistical and simulation models applied to imperfect hospital data. The research provides new insights into the transmission dynamics of methicillin-resistant Staphylococcus aureus within a hospital ward to assist infection control and prevention efforts. Additionally, appropriate statistical methods are identified to analyse hospital infection data which take into account the intricacies and potential limitations of such data.
12

Green, Daniel. "Understanding urban rainfall-runoff responses using physical and numerical modelling approaches". Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/33530.

Abstract:
This thesis provides a novel investigation into rainfall-runoff processes occurring within a unique two-tiered, depth-driven overland flow physical modelling environment, as well as within a numerical model context where parameterisation and DEM/building resolution influences have been investigated using an innovative de-coupled methodology. Two approaches to simulating urban rainfall-runoff responses were used. Firstly, a novel 9 m² physical modelling environment was used, consisting of (i) a low-cost rainfall simulator component able to simulate consistent, uniformly distributed rainfall events of varying duration and intensity, and (ii) a modular plot surface layer. Secondly, a numerical hydroinundation model (FloodMap2D-HydroInundation) was used to simulate a short-duration, high-intensity surface water flood event (28th June 2012, Loughborough University campus). The physical model showed sensitivities to a number of meteorological and terrestrial factors. Results demonstrated intuitive model sensitivity to increasing the intensity and duration of rainfall, resulting in higher peak discharges and larger outflow volumes at the model outflow unit, as well as increases in the water depth within the physical model plot surface. Increases in percentage permeability were also shown to alter outflow flood hydrograph shape, volume, magnitude and timing due to storages within the physical model plot. Thus, a reduction in the overall volume of water received at the outflow hydrograph and a decrease in the peak of the flood event was observed with an increase in permeability coverage. Increases in the density of buildings resulted in a steeper rising limb and a more rapid receding limb of the hydrograph, suggesting a more rapid hydrological response. This indicates that buildings can have a channelling influence on surface water flows as well as a blockage effect. The layout and distribution of permeable elements was also shown to affect the rainfall-runoff response recorded at the model outflow, with downstream-concentrated permeability resulting in statistically different hydrograph outflow data; however, the layout of buildings was not seen to result in significant changes to the outflow flood hydrographs, which appeared to be influenced only by the actual quantity and density of buildings, rather than their spatial distribution and placement within the catchment. Parameterisation of hydraulic (roughness) and hydrological (drainage rate, infiltration and evapotranspiration) model variables, and the influence of mesh resolution of elevation and building elements on surface water inundation outputs, both at the global and local level, were studied. Further, the viability of crowdsourced approaches to provide external model validation data in conjunction with dGPS water depth data was assessed. Parameterisation demonstrated that drainage rate changes within the expected range of parameter values resulted in considerable losses from the numerical model domain at global and local scales. Further, the model was shown to be moderately sensitive to hydraulic conductivity and roughness parameterisation at both scales of analysis. Conversely, the parameterisation of evapotranspiration demonstrated that the model was largely insensitive to changes of evapotranspiration rates at the global and local scales.
Detailed analyses at the hotspot level were critical to calibrate and validate the numerical model, as well as allowing small-scale variations to be understood using at-a-point hydrograph assessments. A localised analysis was shown to be especially important to identify the effects of resolution changes in the DEM and buildings which were shown to be spatially dependent on the density, presence, size and geometry of buildings within the study site. The resolution of the topographic elements of a DEM were also shown to be crucial in altering the flood characteristics at the global and localised hotspot levels. A novel de-coupled investigation of the elevation and building components of the DEM in a strategic matrix of scenarios was used to understand the independent influence of building and topographic mesh resolution effects on surface water flood outputs. Notably, the inclusion of buildings on a DEM surface was shown to have a considerable influence on the distribution of flood waters through time (regardless of resolution), with the exclusion of buildings from the DEM grid being shown to produce less accurate results than altering the overall resolution of the horizontal DEM grid cells. This suggests that future surface water flood studies should focus on the inclusion and representation of buildings and structural features present on the DEM surface as these have a crucial role in modifying rainfall-runoff responses. Focus on building representation was shown to be more vital than concentrating on advances in the horizontal resolution of the grid cells which make up a DEM, as a DEM resolution of 2 m was shown to be sufficiently detailed to conduct the urban surface water flood modelling undertaken, supporting previous inundation research.
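The storage-driven attenuation described above is the behaviour of even the simplest conceptual rainfall-runoff model. The sketch below (a single linear reservoir with invented parameters, not the FloodMap2D-HydroInundation model) shows how added storage delays and flattens the outflow hydrograph.

```python
# A minimal sketch of a single linear-reservoir rainfall-runoff model:
# outflow is proportional to storage, so more storage (permeability)
# attenuates and delays the hydrograph peak.
import numpy as np

dt = 60.0                       # time step, s
rain = np.zeros(120)            # two hours at 1-minute resolution
rain[10:40] = 50 / 3.6e6        # 50 mm/h block storm, converted to m/s
area = 9.0                      # plot area, m^2 (cf. the 9 m^2 rig)
k = 600.0                       # reservoir time constant, s

storage, outflow = 0.0, []
for p in rain:
    inflow = p * area           # m^3/s reaching the store
    q = storage / k             # linear reservoir: Q = S / k
    storage += (inflow - q) * dt
    outflow.append(q)

peak = max(outflow)
print(f"peak outflow {peak * 1000:.3f} l/s at step {outflow.index(peak)}")
```

Increasing k (more storage per unit outflow) lowers and delays the peak, mirroring the permeability effect observed in the physical model.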
13

Alfadda, Dalal Abdulaziz. "How Does a ‘Model of Graphics’ Approach and Peer Tutoring Lead to Deep Understanding of Data Visualisation?" Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/27203.

Abstract:
This thesis describes how students with no knowledge of descriptive statistics and no experience with data visualisation software learned to produce such visualisations. It proposes a model of graphics approach that enables students to compose a description of a graph and move beyond standard chart types. Two quasi-experimental studies using a one-group pretest–posttest design were conducted with 62 third-year education students from two Australian universities. The intervention involved learning from exposition through an online module, including instructional videos, worked-out examples, problem-solving, and peer feedback. Participants learned to construct two chart types using the statistical software ggplot2. Participants' utterances and screens were recorded and subjected to detailed qualitative analysis. The results suggest that the model of graphics and peer tutoring are effective learning methods for data visualisation production. Analysis of pretest-to-posttest gains showed significant increases in students' scores, and the students demonstrated knowledge transfer. Qualitative analysis showed that during construction of the model of graphics, students engaged in active and constructive learning modes in terms of the Interactive–Constructive–Active–Passive (ICAP) theory. The findings suggest that having the tutor at a higher scoring level than the tutee is most beneficial. This research contributes to data science and statistics education. It is the first study to examine novices' initial hours of learning about a theory of statistical graphs and their first use of ggplot2. It lays the groundwork for the model of graphics competencies and establishes a qualitative framework for analysing students' learning and interaction. The findings contribute to the field of peer tutoring, particularly reciprocal peer tutoring. The impact of role sequence in reciprocal tutoring has been almost absent from the literature; this study emphasises the potential impact of the first role (tutor or tutee) on learning and transfer.
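For readers unfamiliar with the grammar-of-graphics idea behind the "model of graphics", the sketch below composes a chart from data, aesthetic mappings, and a geometry layer. It uses plotnine, a Python implementation of the same grammar that underlies ggplot2 (the thesis's participants used ggplot2 itself, in R), with invented scores.

```python
# A minimal sketch of composing a graph from grammar-of-graphics parts.
import pandas as pd
from plotnine import ggplot, aes, geom_col, labs

scores = pd.DataFrame({
    "group": ["pretest", "posttest"],
    "mean_score": [42.0, 71.0],          # illustrative values only
})

# The "model of graphics": data + aesthetic mappings + a geometry layer
plot = (ggplot(scores, aes(x="group", y="mean_score"))
        + geom_col()
        + labs(x="Test", y="Mean score", title="Pretest vs posttest"))
plot.save("scores.png", width=4, height=3, dpi=150)
```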
14

Cowburn, I. Malcolm, Victoria J. Lavis, and Tammi Walker. "BME sex offenders in prison: the problem of participation in offending behaviour groupwork programmes: a tripartite model of understanding". De Montfort University and Sheffield Hallam University, 2008. http://hdl.handle.net/10454/2550.

Abstract:
This paper addresses the under-representation of Black and minority ethnic (BME) sex offenders in the sex offender treatment programme (SOTP) of the prisons of England and Wales. The proportional over-representation of BME men in the male sex offender population of the prisons of England and Wales has been noted for at least ten years. Similarly, the under-representation of BME sex offenders in prison treatment programmes has been a cause for concern during the last decade. This paper presents current demographic data relating to male BME sex offenders in the prisons of England and Wales. The paper draws together a wide range of social and cultural theories to develop a tripartite model for understanding the dynamics underlying the non-participation of BME sex offenders in treatment programmes.
15

Wahi, Rabbani Rash-ha. "Towards an understanding of the factors associated with severe injuries to cyclists in crashes with motor vehicles". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121426/1/Rabbani%20Rash-Ha_Wahi_Thesis.pdf.

Abstract:
This thesis aimed to develop statistical models to overcome limitations in police-reported data to better understand the factors contributing to severe injuries in bicycle motor-vehicle crashes. In low-cycling countries such as Australia, collisions with motor vehicles are the major causes of severe injuries to cyclists and fear of collisions prevents many people from taking up cycling. The empirical results obtained from the models provide valuable insights to assist transport and enforcement agencies to improve cyclist safety.
16

Holm, Henrik. "Bidirectional Encoder Representations from Transformers (BERT) for Question Answering in the Telecom Domain: Adapting a BERT-like language model to the telecom domain using the ELECTRA pre-training approach". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301313.

Abstract:
The Natural Language Processing (NLP) research area has seen notable advancements in recent years, one being the ELECTRA model, which improves the sample efficiency of BERT pre-training by introducing a discriminative pre-training approach. Most publicly available language models are trained on general-domain datasets. Thus, research is lacking for niche domains with domain-specific vocabulary. In this paper, the process of adapting a BERT-like model to the telecom domain is investigated. For efficiency in training the model, the ELECTRA approach is selected. For measuring target-domain performance, the Question Answering (QA) downstream task within the telecom domain is used. Three domain adaptation approaches are considered: (1) continued pre-training on telecom-domain text starting from a general-domain checkpoint, (2) pre-training on telecom-domain text from scratch, and (3) pre-training from scratch on a combination of general-domain and telecom-domain text. Findings indicate that approach 1 is both inexpensive and effective, as target-domain performance increases are seen already after small amounts of training, while generalizability is retained. Approach 2 shows the highest performance on the target-domain QA task by a wide margin, albeit at the expense of generalizability. Approach 3 combines the benefits of the former two by achieving good performance on QA both in the general domain and the telecom domain. At the same time, it allows for a tokenization vocabulary well suited for both domains. In conclusion, the suitability of a given domain adaptation approach is shown to depend on the available data and computational budget. The results highlight the clear benefits of domain adaptation, even when the QA task is learned through behavioral fine-tuning on a general-domain QA dataset due to insufficient amounts of labeled target-domain data being available.
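Approach 1, continued pre-training from a general-domain checkpoint, can be sketched with the Hugging Face libraries. The sketch below uses BERT-style masked language modelling rather than the ELECTRA objective the thesis used (ELECTRA's generator/discriminator setup is more involved), and "telecom_corpus.txt" is a hypothetical file of raw in-domain text.

```python
# A minimal sketch of continued in-domain pre-training via masked LM.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

checkpoint = "bert-base-uncased"                 # general-domain start
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

raw = load_dataset("text", data_files={"train": "telecom_corpus.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-telecom", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()   # afterwards, fine-tune on a QA dataset (e.g. SQuAD-style)
```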
17

Cox, Katrina M. "Understanding Brigham Young University's Technology Teacher Education Program's Success in Attracting and Retaining Female Students". Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1416.pdf.

18

Enqvist, Juulia. "Developing an understanding of users through an insights generation model : How insights about users can be generated from a variety of sources available in an organization". Thesis, Uppsala universitet, Institutionen för informatik och media, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-331969.

Abstract:
User-centered design (UCD) is a process which aims to understand user needs and desires using different tools and methods. This is challenging in industry, as companies have goals that differ from those of the academic discipline of user-centered design; consequently, common UCD methods from the academic field are often not used, and there is a gap between how UCD is done in practice and in theory. Designers and user experience specialists must use the tools that are available and capitalize on existing resources in the organization in order to understand users and their needs. Insights explain the why and the motivation of the consumer or user; they are less apparent, intangible, hidden truths that result from continuous digging. Insights can be drawn from several different sources, both quantitative data and qualitative sources. This thesis investigates from which available sources in an organization insights can be generated in order to understand users and design better experiences, specifically from the organization's perspective. The purpose is not only to understand users but also to drive the organization's objectives and goals. This thesis uses an innovative collaborative workshop methodology, working with digital designers, to answer the research questions and, as a result, presents an insights generation model. The research was conducted for a specific organization and from its available sources, but the methodology and model creation have the potential to be used in similar settings, domains, or projects.
19

Beillevaire, Marc. "Inside the Black Box: How to Explain Individual Predictions of a Machine Learning Model : How to automatically generate insights on predictive model outputs, and gain a better understanding on how the model predicts each individual data point". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229667.

Abstract:
Machine learning models are becoming more and more powerful and accurate, but their good predictions usually come at the cost of high complexity. Depending on the situation, such a lack of interpretability can be an important, blocking issue. This is especially the case when trust is needed on the user side in order to take a decision based on the model's prediction. For instance, when an insurance company uses a machine learning algorithm to detect fraudsters, the company would want to trust that the model is based on meaningful variables before actually taking action and investigating a particular individual. In this thesis, several explanation methods are described and compared on multiple datasets (text and numerical data), on classification and regression problems.
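One family of methods compared in such work is local surrogate explanation, the core idea behind LIME. The sketch below hand-rolls that idea on synthetic data: perturb features around one instance, query the black-box model, and fit a distance-weighted linear model whose coefficients indicate local feature influence. It is an illustration of the idea, not the thesis's exact implementations.

```python
# A minimal sketch of a local surrogate explanation for one prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                       # instance to explain
Z = x0 + 0.3 * rng.standard_normal((1000, 5))   # local perturbations
pz = black_box.predict_proba(Z)[:, 1]           # black-box outputs
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1))  # nearer samples count more

surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=weights)
print(np.round(surrogate.coef_, 3))  # local importance of each feature
```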
20

O'Shea, Philip James. "Stochastic models for speech understanding". Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337961.

21

Douzon, Thibault. "Language models for document understanding". Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.

Abstract:
Every day, an enormous number of documents are received and processed by companies worldwide. In an effort to reduce the cost of processing each document, the largest companies have resorted to document automation technologies. In an ideal world, a document can be automatically processed without any human intervention: its content is read, and information is extracted and forwarded to the relevant service. The state-of-the-art techniques have evolved quickly in the last decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architecture for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training are able to perform document understanding tasks with high performance. We show that, when used as token classifiers for information extraction, transformers learn the task far more efficiently than recurrent networks. Transformers need only a small proportion of the training data to get close to maximum performance. This highlights the importance of self-supervised pre-training for future fine-tuning. In the following part, we design specialized pre-training tasks to better prepare the model for specific data distributions such as business documents. By acknowledging the specificities of business documents, such as their table structure and their over-representation of numeric figures, we are able to target specific skills useful for the model in its future tasks. We show that those new tasks improve the model's downstream performance, even with small models. Using this pre-training approach, we are able to reach the performance of significantly bigger models without any additional cost during fine-tuning or inference. Finally, in the last part, we address one drawback of the transformer architecture: its computational cost when used on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, due to how they approximate the attention computation, efficient models suffer from a small but significant performance drop on short sequences compared to classical architectures. This incentivizes the use of different models depending on the input length and enables concatenating multimodal inputs into a single sequence.
22

Satkin, Scott. "Data-Driven Geometric Scene Understanding". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/280.

Abstract:
In this thesis, we describe a data-driven approach to leverage repositories of 3D models for scene understanding. Our ability to relate what we see in an image to a large collection of 3D models allows us to transfer information from these models, creating a rich understanding of the scene. We develop a framework for auto-calibrating a camera, rendering 3D models from the viewpoint from which an image was taken, and computing a similarity measure between each 3D model and an input image. We demonstrate this data-driven approach in the context of geometry estimation and show the ability to find the identities, poses and styles of objects in a scene. We begin by presenting a proof-of-concept algorithm for matching 3D models with input images. Next, we present a series of extensions to this baseline approach. Our goals here are three-fold. First, we aim to produce more accurate reconstructions of a scene by determining both the exact style and size of objects as well as precisely localizing their positions. In addition, we aim to increase the robustness of our scene-matching approach by incorporating new features and expanding our search space to include many viewpoint hypotheses. Lastly, we address the computational challenges of our approach by presenting algorithms for more efficiently exploring the space of 3D scene hypotheses, without sacrificing the quality of results. We conclude by presenting various applications of our geometric scene understanding approach. We start by demonstrating the effectiveness of our algorithm for traditional applications such as object detection and segmentation. In addition, we present two novel applications incorporating our geometry estimates: affordance estimation and geometry-aware object insertion for photorealistic rendering.
23

Alsheikh, Sami Thabet. "Automated understanding of data visualizations". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112830.

Abstract:
When a person views a data visualization (graph, chart, infographic, etc.), they read the text and process the images to quickly understand the communicated message. This research works toward emulating this ability in computers. In pursuing this goal, we have explored three primary research objectives: 1) extracting and ranking the most relevant keywords in a data visualization, 2) predicting a sensible topic and multiple subtopics for a data visualization, and 3) extracting relevant pictographs from a data visualization. For the first task, we create an automatic text extraction and ranking system which we evaluate on 202 MASSVIS data visualizations. For the last two objectives, we curate a more diverse and complex dataset, Visually. We devise a computational approach that automatically outputs textual and visual elements predicted to be representative of the data visualization content. Concretely, from the curated Visually dataset of 29K large infographic images sampled across 26 categories and 391 tags, we present an automated two-step approach: first, we use extracted text to predict the text tags indicative of the infographic content, and second, we use these predicted text tags to localize the most diagnostic visual elements (what we have called "visual tags"). We report performances on a categorization and multi-label tag prediction problem and compare the results to human annotations. Our results show promise for automated human-like understanding of data visualizations.
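The first objective, extracting and ranking relevant keywords, can be illustrated with plain TF-IDF weighting; the strings below are invented stand-ins for text pulled from infographics by OCR, and the thesis's actual ranking system is more elaborate.

```python
# A minimal sketch of ranking extracted visualization text by TF-IDF weight.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "global coffee consumption by country cups per capita 2015",
    "smartphone market share by vendor quarterly shipments",
    "coffee exports top producing countries million bags",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)

terms = vec.get_feature_names_out()
row = tfidf[0].toarray().ravel()                 # first infographic
top = row.argsort()[::-1][:5]
print([(terms[i], round(row[i], 2)) for i in top])  # ranked keywords
```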
24

David-Rus, Richard. "Explanation and Understanding through Scientific Models". Diss., lmu, 2009. http://nbn-resolving.de/urn:nbn:de:bvb:19-111655.

25

Ladicky, Lubor. "Global structured models towards scene understanding". Thesis, Oxford Brookes University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543818.

26

Segal, Aleksandr V. "Iterative Local Model Selection for tracking and mapping". Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:8690e0e0-33c5-403e-afdf-e5538e5d304f.

Abstract:
The past decade has seen great progress in research on large scale mapping and perception in static environments. Real world perception requires handling uncertain situations with multiple possible interpretations: e.g. changing appearances, dynamic objects, and varying motion models. These aspects of perception have been largely avoided through the use of heuristics and preprocessing. This thesis is motivated by the challenge of including discrete reasoning directly into the estimation process. We approach the problem by using Conditional Linear Gaussian Networks (CLGNs) as a generalization of least-squares estimation which allows the inclusion of discrete model selection variables. CLGNs are a powerful framework for modeling sparse multi-modal inference problems, but are difficult to solve efficiently. We propose the Iterative Local Model Selection (ILMS) algorithm as a general approximation strategy specifically geared towards the large scale problems encountered in tracking and mapping. Chapter 4 introduces the ILMS algorithm and compares its performance to traditional approximate inference techniques for Switching Linear Dynamical Systems (SLDSs). These evaluations validate the characteristics of the algorithm which make it particularly attractive for applications in robot perception. Chief among these are reliability of convergence, consistent performance, and a reasonable trade-off between accuracy and efficiency. In Chapter 5, we show how the data association problem in multi-target tracking can be formulated as an SLDS and effectively solved using ILMS. The SLDS formulation allows the addition of discrete variables which model outliers and clutter in the scene. Evaluations on standard pedestrian tracking sequences demonstrate performance competitive with the state of the art. Chapter 6 applies the ILMS algorithm to robust pose graph estimation. A non-linear CLGN is constructed by introducing outlier indicator variables for all loop closures. The standard Gauss-Newton optimization algorithm is modified to use ILMS as an inference algorithm in between linearizations. Experiments demonstrate a large improvement over state-of-the-art robust techniques. The ILMS strategy presented in this thesis is simple and general, but still works surprisingly well. We argue that these properties are encouraging for wider applicability to problems in robot perception.
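CLGNs generalize the linear-Gaussian estimation that a Kalman filter performs, adding discrete model-selection variables. As background, the sketch below runs one-dimensional tracking with a standard Kalman filter (illustrative matrices); in an SLDS, a discrete switch variable would choose among several dynamics models (F, Q) at each step, which is the inference problem ILMS approximates.

```python
# A minimal sketch of the linear-Gaussian building block that CLGNs extend:
# predict/update cycles of a Kalman filter for position-velocity tracking.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise
R = np.array([[0.5]])                    # measurement noise

x = np.zeros(2)                          # state mean [pos, vel]
P = np.eye(2)                            # state covariance

for z in [1.1, 2.0, 2.9, 4.2]:           # noisy position measurements
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(np.round(x, 2))                    # estimated position and velocity
```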
27

Wachsmuth, Sven. "Multi-modal scene understanding using probabilistic models". [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=963782053.

28

MARTINI, MASSIMO. "Deep Learning based models for Space Understanding". Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/295461.

Abstract:
The understanding of a space has always been of great interest to the scientific community, as it is widely used in various fields. The aim of this research is to improve the state of the art in all aspects of Space Understanding, both static and dynamic. Initially, a new deep learning method for Point Cloud Semantic Segmentation of a space is described. This approach uses additional, more discriminative features compared to the state of the art. Experiments were carried out both in the cultural heritage field and on indoor scenes. Then, generative approaches are proposed as a data augmentation technique. The reliability of the methods was evaluated on a novel dataset, and the results obtained showed that they outperform the state of the art. The proposed DGCNN-Mod increases the accuracy on the 2 test scenes of the ArCH dataset by 26.86% and 4.37% compared to DGCNN; handcrafted features raise these gains to 28.44% and 6.21%. Then, a methodology mixing ML and DL approaches is proposed for the Change Detection task on a dynamic scene. It exploits visual and textual features extracted from RGB images acquired in a retail environment. Their combination is used to train a final classifier that gives an overall assessment of the state of the space. The reliability of the proposed methods was investigated using a novel dataset and by studying the behaviour of consumers. Finally, an algorithm for Person Re-Identification using RGB-D videos with a top-view configuration is described. It is designed to work in both closed- and open-world environments, and it integrates visual, spatial, and temporal features. This method has been validated by acquiring two new RGB-D video datasets, for a retail and a museum environment. The results obtained showed that the methods outperform state-of-the-art approaches: the proposed TVOW improves accuracy on the new TVPR2 dataset by 2.72% and on the TVPR dataset by 0.45%.
Style APA, Harvard, Vancouver, ISO itp.
29

Liu, Jiali. "Data expression : understanding and supporting alternatives in data analysis processes". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT022.

Pełny tekst źródła
Streszczenie:
To make sense of data, analysts consider different kinds of alternatives: they explore diverse sets of hypotheses, try out different types of methods, and experiment with a broad space of possible solutions. These alternatives influence each other within a dynamic and complex sensemaking process. Current analytic tools, however, rarely consider such alternatives as an integral part of the analysis, making the process cumbersome and cognitively demanding. Applying various empirical methods and tool designs, we address the following questions: (1) What are alternatives and how do they fit within the sensemaking process? And (2) how can tools better support the exploration and management of alternatives? This dissertation contains three parts. Part I explores the role of alternatives through interviews and observations with analysts. Based on the results and our analysis, we contribute characterisations of alternatives and a framework to help describe and reason about them. Part II focuses on supporting alternatives in the context of affinity diagramming for qualitative data analysis. Through interviews with practitioners, combined with our own experience, we propose a design space to characterise the various kinds of alternatives engaged in such a sensemaking process. We further provide a vision and proof-of-concept system, ADQDA, to show how analysts can fluidly transition between alternative analysis phases, methods, and representations, and how they can flexibly appropriate various devices to suit the tasks at hand or to extend the analysis space. Part III discusses alternatives in the context of reuse. We envision a novel reuse technique, "computational transclusion", which maintains various dynamic links between the original and the reused contents (the alternatives) to facilitate tracking and coordinating changes. We built a sandbox system to probe different reuse scenarios and explore the various links between alternatives and their possible reifications in notebook-style user interfaces.
Style APA, Harvard, Vancouver, ISO itp.
30

Modur, Sharada P. "Missing Data Methods for Clustered Longitudinal Data". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1274876785.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Borgelt, Christian. "Data mining with graphical models". [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=962912107.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Osuna, Echavarría Leyre Estíbaliz. "Semiparametric Bayesian Count Data Models". Diss., lmu, 2004. http://nbn-resolving.de/urn:nbn:de:bvb:19-25573.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33

Sanz-Alonso, Daniel. "Assimilating data into mathematical models". Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/83231/.

Pełny tekst źródła
Streszczenie:
Chapter 1 is a brief overview of the Bayesian approach to blending mathematical models with data. For this introductory chapter, I do not claim any originality in the material itself, but only in the presentation, and in the choice of contents. Chapters 2, 3 and 4 are transcripts of published and submitted papers, with minimal cosmetic modifications. I now detail my contributions to each of these papers. Chapter 2 is a transcript of the published paper "Long-time Asymptotics of the Filtering Distribution for Partially Observed Chaotic Dynamical Systems" [Sanz-Alonso and Stuart, 2015], written in collaboration with Andrew Stuart. The idea of building a unified framework for studying filtering of chaotic dissipative dynamical systems is from Andrew. My ideas include the truncation of the 3DVAR algorithm that allows for unbounded observation noise, using the squeezing property as the unifying arch across all models, and most of the links with control theory. I stated and proved all the results of the paper. I also wrote the first version of the paper, which was subsequently much improved with Andrew's input. Chapter 3 is a transcript of the published paper "Filter Accuracy for the Lorenz 96 Model: Fixed Versus Adaptive Observation Operators" [Law et al., 2016], written in collaboration with Kody Law, Abhishek Shukla, and Andrew Stuart. My contribution to this paper was in proving most of the theoretical results. I did not contribute to the numerical experiments. The idea of using adaptive observation operators is from Abhishek. Chapter 4 is a transcript of the submitted paper "Importance Sampling: Computational Complexity and Intrinsic Dimension" [Agapiou et al., 2015], written in collaboration with Sergios Agapiou, Omiros Papaspiliopoulos, and Andrew Stuart. The idea of relating the two notions of intrinsic dimension described in the paper is from Omiros. Sergios stated and proved Theorem 4.2.3. Andrew's input was fundamental in making the paper well structured, and in the overall writing style. The paper was written very collaboratively among the four of us, and some of the results were the fruit of many discussions involving different subsets of authors. Some of my inputs include: the idea of using metrics between probability measures to study the performance of importance sampling, establishing connections to tempering, the analysis of singular limits both for inverse problems and filtering, most of the filtering section and in particular the use of the theory of inverse problems to analyze different proposals in the filtering set-up, the proof of Theorem 4.2.1, and substantial input in the proof of all the results of the paper not mentioned before. This paper aims to bring cohesion and new insights into a topic with a vast literature, and I helped towards this goal by doing most of the literature review involved.
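As background for the 3DVAR algorithm mentioned above, the sketch below shows the generic structure of a 3DVAR-style analysis step: forecast with the model, then nudge toward the observation with a fixed gain. The dynamics, observation operator, and gain here are toy stand-ins, not the systems analysed in the thesis.

```python
# Toy 3DVAR-style filter: constant gain K, partial observation H.
import numpy as np

def step(v):
    """Toy nonlinear map standing in for the model dynamics."""
    return np.array([v[1], v[0] * (1.0 - v[0] ** 2)])

H = np.array([[1.0, 0.0]])    # observe only the first component
K = np.array([[0.5], [0.0]])  # fixed gain (3DVAR uses a constant gain)

def analysis_step(m, y_obs):
    """Forecast with the model, then correct with the innovation."""
    forecast = step(m)
    innovation = y_obs - H @ forecast
    return forecast + (K @ innovation).ravel()

m = np.array([0.1, 0.2])
for y in [0.3, 0.1, -0.2]:          # a short stream of noisy observations
    m = analysis_step(m, np.array([y]))
print(m)
```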
Style APA, Harvard, Vancouver, ISO itp.
34

Pliuskuvienė, Birutė. "Adaptive data models in design". Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080627_143940-41525.

Pełny tekst źródła
Streszczenie:
The dissertation examines the problem of adapting software whose instability is caused by changes in the content and structure of primary data, as well as in the algorithms that implement solutions to applied problems. The solution to the problem is based on a methodology of adapting models for data expressed as relational sets.
Style APA, Harvard, Vancouver, ISO itp.
35

Farewell, Daniel Mark. "Linear models for censored data". Thesis, Lancaster University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441117.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
36

Woodgate, Rebecca A. "Data assimilation in ocean models". Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359566.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Moore, A. M. "Data assimilation in ocean models". Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375276.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Louzada-Neto, Francisco. "Hazard models for lifetime data". Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268248.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Ivan, Thomas R. "Comparison of data integrity models". Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/43739.

Pełny tekst źródła
Streszczenie:
Approved for public release; distribution is unlimited
Data integrity in computer-based information systems is a concern because of the damage that can be done by unauthorized manipulation or modification of data. While a standard exists for data security, there is currently no accepted standard for integrity. A data integrity policy needs to be incorporated into the data security standard in order to produce a complete protection policy. Several existing models address data integrity. The Biba, Goguen and Meseguer, and Clark/Wilson data integrity models each offer a definition of data integrity and introduce their own mechanisms for preserving it. Acceptance of one of these models as a standard for data integrity would create a complete protection policy which addresses both security and integrity.
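For illustration, the Biba model mentioned above reduces, in its strict form, to two access-check rules ("no read down, no write up"). The sketch below encodes them with invented integrity levels; it is a minimal caricature, not a full implementation of any of the models compared in the thesis.

```python
# Biba strict-integrity sketch: subjects may not read lower-integrity data
# ("no read down") nor write higher-integrity data ("no write up").
LEVELS = {"low": 0, "medium": 1, "high": 2}  # illustrative labels

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple integrity property: read only at or above your own level.
    return LEVELS[object_level] >= LEVELS[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-integrity property: write only at or below your own level.
    return LEVELS[object_level] <= LEVELS[subject_level]

assert can_read("medium", "high") and not can_read("medium", "low")
assert can_write("medium", "low") and not can_write("medium", "high")
```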
Style APA, Harvard, Vancouver, ISO itp.
40

Granstedt, Jason Louis. "Data Augmentation with Seq2Seq Models". Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78315.

Pełny tekst źródła
Streszczenie:
Paraphrase sparsity is an issue that complicates the training process of question answering systems: syntactically diverse but semantically equivalent sentences can have significant disparities in predicted output probabilities. We propose a method for generating an augmented paraphrase corpus for a visual question answering system to make it more robust to paraphrases. This corpus is generated by concatenating two sequence-to-sequence models. In order to generate diverse paraphrases, we sample the neural network using diverse beam search. We evaluate the results on the standard VQA validation set. Our approach results in a significantly expanded training dataset and vocabulary size, but performs slightly worse when tested on the validation split. Although not as fruitful as we had hoped, our work highlights additional avenues for investigation into selecting better model parameters and developing a more sophisticated paraphrase filtering algorithm. The primary contribution of this work is the demonstration that decent paraphrases can be generated from sequence-to-sequence models, along with a pipeline for building an augmented dataset.
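The search procedure named here, diverse beam search, can be illustrated independently of any trained network. The toy sketch below runs grouped beam search over a hand-made bigram table, penalizing tokens already chosen by earlier groups at the same step; the table, penalty, and sizes are invented purely to show the mechanism.

```python
# Toy diverse beam search: groups expand in turn, each penalized for
# reusing tokens that earlier groups chose at the current time step.
import math

BIGRAMS = {  # toy log-probabilities of the next word given the previous one
    "<s>": {"how": math.log(0.5), "what": math.log(0.5)},
    "how": {"big": math.log(0.6), "large": math.log(0.4)},
    "what": {"size": math.log(0.7), "big": math.log(0.3)},
    "big": {"</s>": 0.0}, "large": {"</s>": 0.0}, "size": {"</s>": 0.0},
}

def diverse_beam_search(groups=2, beams_per_group=1, steps=3, penalty=1.0):
    hyps = [[(["<s>"], 0.0)] for _ in range(groups)]
    for _ in range(steps):
        used = set()  # tokens already chosen by earlier groups this step
        for g in range(groups):
            candidates = []
            for seq, score in hyps[g]:
                if seq[-1] == "</s>":          # finished hypotheses persist
                    candidates.append((seq, score))
                    continue
                for tok, lp in BIGRAMS[seq[-1]].items():
                    div = penalty if tok in used else 0.0  # diversity term
                    candidates.append((seq + [tok], score + lp - div))
            hyps[g] = sorted(candidates, key=lambda c: -c[1])[:beams_per_group]
            used.update(seq[-1] for seq, _ in hyps[g])
    return [hyp for group in hyps for hyp in group]

for seq, score in diverse_beam_search():
    print(" ".join(seq[1:-1]), round(score, 2))  # e.g. "how big", "what size"
```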
Master of Science
Style APA, Harvard, Vancouver, ISO itp.
41

Khatiwada, Aastha. "Multilevel Models for Longitudinal Data". Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3090.

Pełny tekst źródła
Streszczenie:
Longitudinal data arise when individuals are measured several times during an observation period, so the data for each individual are not independent. There are several ways of analyzing longitudinal data when different treatments are compared. Multilevel models are used to analyze data that are clustered in some way. In this work, multilevel models are used to analyze longitudinal data from a case study. Results from other, more commonly used methods are compared to multilevel models, and the outputs of two software packages, SAS and R, are compared. Finally, a method is proposed that fits an individual model for each subject and then performs an ANOVA-type analysis on the estimated parameters of the individual models; its power for different sample sizes and effect sizes is studied by simulation.
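The proposed two-stage method lends itself to a compact illustration: fit a least-squares line per individual, then compare the estimated slopes across treatment groups with a one-way ANOVA. The sketch below does this on simulated toy data; all values and group structure are invented.

```python
# Stage 1: per-individual least-squares slopes. Stage 2: ANOVA on slopes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = np.arange(5)  # five repeated measurements per individual

def slope(y, t=times):
    return np.polyfit(t, y, 1)[0]  # first coefficient = fitted slope

# Two treatment groups of 20 individuals, differing in mean trend.
group_a = [slope(1.0 * times + rng.normal(0, 1, 5)) for _ in range(20)]
group_b = [slope(1.5 * times + rng.normal(0, 1, 5)) for _ in range(20)]

f_stat, p_value = stats.f_oneway(group_a, group_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```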
Style APA, Harvard, Vancouver, ISO itp.
42

Rolfe, Margaret Irene. "Bayesian models for longitudinal data". Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/34435/1/Margaret_Rolfe_Thesis.pdf.

Pełny tekst źródła
Streszczenie:
Longitudinal data, where data are repeatedly observed or measured on a temporal basis of time or age, provide the foundation for the analysis of processes which evolve over time, and these can be referred to as growth or trajectory models. One of the traditional ways of looking at growth models is to employ either linear or polynomial functional forms to model trajectory shape, and to account for variation around an overall mean trend with the inclusion of random effects or individual variation on the functional shape parameters. The identification of distinct subgroups or sub-classes (latent classes) within these trajectory models which are not based on some pre-existing individual classification provides an important methodology with substantive implications. The identification of subgroups or classes has wide application in the medical arena, where responder/non-responder identification based on distinctly differing trajectories delivers further information for clinical processes. This thesis develops Bayesian statistical models and techniques for the identification of subgroups in the analysis of longitudinal data where the number of time intervals is limited. These models are then applied to a single case study which investigates neuropsychological cognition for early stage breast cancer patients undergoing adjuvant chemotherapy treatment, from the Cognition in Breast Cancer Study undertaken by the Wesley Research Institute of Brisbane, Queensland. Alternative formulations to the linear or polynomial approach are taken which use piecewise linear models with a single turning point, change-point or knot at a known time point, and latent basis models for the non-linear trajectories found for the verbal memory domain of cognitive function before and after chemotherapy treatment. Hierarchical Bayesian random effects models are used as a starting point for the latent class modelling process and are extended with the incorporation of covariates in the trajectory profiles and as predictors of class membership. The Bayesian latent basis models enable the degree of recovery post-chemotherapy to be estimated for short- and long-term follow-up occasions, and the distinct class trajectories assist in the identification of breast cancer patients who may be at risk of long-term verbal memory impairment.
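A piecewise linear model with a single known knot, as used here, can be written as an ordinary regression with a hinge term. The sketch below fits such a model by plain least squares on invented scores; the thesis itself uses hierarchical Bayesian machinery rather than this simple fit.

```python
# Piecewise linear trajectory with one known change-point (knot):
# y ~ b0 + b1*t + b2*max(t - knot, 0).
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])    # assessment occasions
knot = 2.0                                       # known change-point
y = np.array([10.0, 9.0, 8.2, 8.8, 9.5, 10.1])  # toy verbal-memory scores

# Design: intercept, pre-knot slope, and change in slope after the knot.
X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, slope, post-knot slope change:", np.round(beta, 3))
```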
Style APA, Harvard, Vancouver, ISO itp.
43

Pulgatti, Leandro Duarte. "Data migration between different data models of NOSQL databases". Universidade Federal do Paraná, 2017. http://hdl.handle.net/1884/49087.

Pełny tekst źródła
Streszczenie:
Advisor: Marcos Didonet Del Fabro
Dissertation (master's) - Universidade Federal do Paraná, Sector of Exact Sciences, Graduate Program in Informatics. Defended: Curitiba, 17/02/2017
Includes references: f. 76-79
Abstract: Since their origin, NoSQL databases have achieved widespread use. Due to the lack of development standards in this new technology, great challenges emerge. Among these challenges, data migration between the various solutions has proved particularly difficult. There are heterogeneous data models, access languages, and frameworks available, which makes data migration even more complex. Most of the solutions available today focus on providing an abstract and generic representation for all data models. These solutions focus on designing adapters to access the data homogeneously, but not on specifically implementing transformations between them. These approaches often need a framework to access the data, which may prevent their use in some scenarios. This dissertation proposes the creation of a metamodel and a series of rules capable of assisting in the data migration task. The data can be converted to various desired formats through an intermediate state. To validate the solution, several tests were performed with different systems, using real, available data. Key-words: NoSql Databases. Metamodel. Data Migration.
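The intermediate-state idea can be sketched in a few lines: lift source records into a neutral representation, then lower them into the target model, so that N source and M target formats need only N + M converters rather than N x M. The structures below are invented for illustration and are not the dissertation's metamodel.

```python
# Migration through a neutral intermediate form (illustrative structures).

def document_to_neutral(doc: dict) -> list[tuple[str, str, object]]:
    """Lift a document-store record into (entity, field, value) triples."""
    return [(doc["_id"], k, v) for k, v in doc.items() if k != "_id"]

def neutral_to_keyvalue(triples) -> dict[str, object]:
    """Lower the neutral form into a key-value model ('entity:field' keys)."""
    return {f"{e}:{f}": v for e, f, v in triples}

record = {"_id": "user42", "name": "Ada", "city": "Curitiba"}
print(neutral_to_keyvalue(document_to_neutral(record)))
# {'user42:name': 'Ada', 'user42:city': 'Curitiba'}
```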
Style APA, Harvard, Vancouver, ISO itp.
44

Vemulapalli, Eswar Venkat Ram Prasad 1976. "Architecture for data exchange among partially consistent data models". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/84814.

Pełny tekst źródła
Streszczenie:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2002.
Includes bibliographical references (leaves 75-76).
by Eswar Venkat Ram Prasad Vemulapalli.
S.M.
Style APA, Harvard, Vancouver, ISO itp.
45

Hultin, Felix. "Understanding Context-free Grammars through Data Visualization". Thesis, Stockholms universitet, Avdelningen för datorlingvistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-130953.

Pełny tekst źródła
Streszczenie:
Ever since the late 1950s, context-free grammars have played an important role within the field of linguistics, been part of introductory courses, and expanded into other fields of study. Meanwhile, data visualization in modern web development has made it possible to do feature-rich visualization in the browser. In this thesis, these two developments are united by developing a browser-based app to write context-free grammars, parse sentences, and visualize the output. A user experience study with usability tests and user interviews is conducted in order to investigate the possible benefits and disadvantages of said visualization when writing context-free grammars. The results show that data visualization was used by participants in a limited way, in that it helped them to see whether sentences were parsed and, if a sentence was not parsed, at which position parsing went wrong. Future improvements to the software and studies on them are proposed, as well as the expansion of the field of data visualization within linguistics.
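For readers unfamiliar with the formalism, the sketch below shows the kind of context-free grammar such a tool lets users write, parsed here with the NLTK library rather than the thesis' browser app; the grammar itself is a made-up toy.

```python
# A toy context-free grammar, parsed and rendered with NLTK.
import nltk

grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> Det N
    VP -> V NP
    Det -> 'the' | 'a'
    N  -> 'dog' | 'cat'
    V  -> 'sees'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the dog sees a cat".split()):
    tree.pretty_print()  # text rendering of the parse tree
```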
Style APA, Harvard, Vancouver, ISO itp.
46

Jarvis, Stuart. "Optimising, understanding and quantifying Itrax XRF data". Thesis, University of Southampton, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.580571.

Pełny tekst źródła
Streszczenie:
The Itrax core scanner provides rapid, high-resolution, non-destructive sediment core analysis using x-ray fluorescence (XRF) spectrometry and x-radiography. The effects of varying instrument settings are explored. Effects of sample properties on XRF data are tested and the uncertainty in XRF data is considered. Existing methods of quantifying XRF data are evaluated. Comparisons are made with other XRF micro-scanners. Finally, the x-radiographic capability of the Itrax core scanner is compared to x-ray computed tomography. Itrax XRF data is generally optimised by use of a 30 kV x-ray tube voltage. Current should be set as high as possible without generation of sum peaks (30 mA is often a good value). A chromium anode tube is suitable for use with most samples. Water content has a diluting effect on detected peak areas, but it is shown that the effect can be corrected, removing an obstacle to quantification of Itrax data. Water content can be determined non-destructively from the ratio of incoherent to coherent scatter of characteristic radiation from the x-ray tube anode. Surface slope can change recorded peak areas, but a simple model is developed to correct for this effect. Surface roughness increases variability in data and, if the scale of roughness is similar to the beam size, elemental peak areas may be reduced. The presence of a mixture of grain sizes greatly reduces peak areas for elements in the larger grains. The uncertainty in Itrax data is found to be higher than suggested by the conventional estimate that uncertainty is equal to the square root of the peak area. This information is vital for researchers to decide what significance they should attach to variations in Itrax elemental profiles. Quantification methods for core scanner XRF data are compared, and an approach using log-ratio transformations is determined to be the best. Additionally, an improved, entirely non-destructive quantification approach is presented in which explicit corrections are made for the diluting effect of water (water content may be determined from the ratio of incoherent to coherent scatter of the tube anode characteristic radiation). Compared to similar instruments, the Itrax core scanner is more tolerant of surface imperfections. Its x-radiographic scanning helps to determine the significance and extent of features revealed in XRF data. Itrax x-radiography provides no match for the level of detail that can be obtained using x-ray computed tomography and is not readily quantified. It does, however, provide information on features below the sample surface and, in masking small variations, can make the main core features more apparent. Users of the Itrax core scanner are provided with quantification of known effects (water content, surface slope, x-ray tube current and voltage) and are alerted to issues that were not previously widely known (mixing of grain sizes, size of uncertainty in data). The areas of effective use and limitations of the Itrax core scanner are set out and recommendations made for optimising results. An optimal quantification method is identified. Many of the conclusions may have relevance to other XRF core scanners.
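Two of the quantitative ingredients mentioned above are easy to illustrate: a centred log-ratio transform of elemental peak areas, and the incoherent/coherent scatter ratio used as a water-content proxy. The numbers in the sketch below are invented; only the transforms themselves are standard.

```python
# Centred log-ratio (clr) of XRF peak areas, plus a scatter-ratio proxy.
import numpy as np

counts = np.array([1200.0, 800.0, 150.0, 60.0])  # toy peak areas: Fe, Ca, Ti, K
clr = np.log(counts) - np.log(counts).mean()     # centred log-ratio transform
print("clr:", np.round(clr, 3))

incoherent, coherent = 5400.0, 3000.0            # toy tube-anode scatter peaks
print("inc/coh ratio (water-content proxy):", incoherent / coherent)
```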
Style APA, Harvard, Vancouver, ISO itp.
47

Harvey, William John. "Understanding High-Dimensional Data Using Reeb Graphs". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1342614959.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Brau, Avila Ernesto. "Bayesian Data Association for Temporal Scene Understanding". Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/312653.

Pełny tekst źródła
Streszczenie:
Understanding the content of a video sequence is not a particularly difficult problem for humans. We can easily identify objects, such as people, and track their position and pose within the 3D world. A computer system that could understand the world through videos would be extremely beneficial in applications such as surveillance, robotics, and biology. Despite significant advances in areas like tracking and, more recently, 3D static scene understanding, such a vision system does not yet exist. In this work, I present progress on this problem, restricted to videos of objects that move smoothly and are relatively easy to detect, such as people. Our goal is to identify all the moving objects in the scene and track their physical state (e.g., their 3D position or pose) in the world throughout the video. We develop a Bayesian generative model of a temporal scene, where we separately model data association, the 3D scene and imaging system, and the likelihood function. Under this model, the video data is the result of capturing the scene with the imaging system and noisily detecting video features. This formulation is very general and can be used to model a wide variety of scenarios, including videos of people walking and time-lapse images of pollen tubes growing in vitro. Importantly, we model the scene in world coordinates and units, as opposed to pixels, allowing us to reason about the world in a natural way, e.g., explaining occlusion and perspective distortion. We use Gaussian processes to model motion, and propose that this is a general and effective way to characterize smooth, but otherwise arbitrary, trajectories. We perform inference using MCMC sampling, where we fit our model of the temporal scene to data extracted from the videos. We address the problem of variable dimensionality by estimating data association and integrating out all scene variables. Our experiments show our approach is competitive, producing results which are comparable to state-of-the-art methods.
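The Gaussian-process motion prior argued for here can be illustrated by sampling smooth trajectories from a GP with a squared-exponential kernel; the kernel and its parameters below are arbitrary choices for illustration, not those of the dissertation.

```python
# Sample smooth 2D trajectories from a GP motion prior.
import numpy as np

def sq_exp_kernel(t, length=2.0, var=1.0):
    d = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 50)                 # time stamps of the video
K = sq_exp_kernel(t) + 1e-8 * np.eye(len(t))   # jitter for numerical stability
trajectory_x = rng.multivariate_normal(np.zeros(len(t)), K)
trajectory_y = rng.multivariate_normal(np.zeros(len(t)), K)
# (trajectory_x, trajectory_y) is one smooth path in world coordinates.
print(np.round(trajectory_x[:5], 3))
```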
Style APA, Harvard, Vancouver, ISO itp.
49

Zama, Ramirez Pierluigi <1992&gt. "Deep Scene Understanding with Limited Training Data". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9815/1/zamaramirez_pierluigi_tesi.pdf.

Pełny tekst źródła
Streszczenie:
Scene understanding by a machine is a challenging task due to the profound variety of the natural world. Nevertheless, deep learning achieves impressive results in several scene understanding tasks such as semantic segmentation, depth estimation, or optical flow. However, these kinds of approaches need a large amount of labeled data, leading to massive manual annotations, which are incredibly tedious and expensive to collect. In this thesis, we will focus on understanding a scene through deep learning with limited data availability. First of all, we will tackle the problem of the lack of data for semantic segmentation. We will show that computer graphics come in handy for our purpose, both to create a new, efficient tool for annotation and to render synthetic annotated datasets quickly. However, a network trained only on synthetic data suffers from the so-called domain-shift problem, i.e., it is unable to generalize to real data. Thus, we will show that we can mitigate this problem using a novel deep image-to-image translation technique. In the second part of the thesis, we will focus on the relationship between scene understanding tasks. We argue that building a model aware of the connections between tasks is the first building block towards more robust, efficient, and performant models that need less annotated training data. In particular, we demonstrate that we can decrease the need for labels by exploiting the relationship between visual tasks. Finally, in the last part, we propose a novel unified framework for comprehensive scene understanding, which exploits the synergies between tasks to be more robust, efficient, and performant.
Style APA, Harvard, Vancouver, ISO itp.
50

Hull, Lynette. "FRACTION MODELS THAT PROMOTE UNDERSTANDING FOR ELEMENTARY STUDENTS". Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3127.

Pełny tekst źródła
Streszczenie:
This study examined the use of the set, area, and linear models of fraction representation to enhance elementary students' conceptual understanding of fractions. Students' preferences regarding the set, area, and linear models of fractions during independent work were also investigated. This study took place in a 5th grade class consisting of 21 students in a suburban public elementary school. Students participated in classroom activities which required them to use manipulatives to represent fractions using the set, area, and linear models. Students also had experiences using the models to investigate equivalent fractions, compare fractions, and perform operations. Students maintained journals throughout the study, completed pre- and post-assessments, participated in class discussions, and took part in individual interviews concerning their fraction model preference. Analysis of the data revealed an increase in conceptual understanding. The data concerning student preferences were inconsistent, as students' choices during independent work did not always reflect the preferences indicated in the interviews.
M.A.
Department of Teaching and Learning Principles
Education
K-8 Mathematics and Science Education
Style APA, Harvard, Vancouver, ISO itp.