Dissertations on the topic „Understanding of data models“
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic „Understanding of data models“.
Next to every entry in the bibliography there is an „Add to bibliography“ option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Sommeria-Klein, Guilhem. „From models to data : understanding biodiversity patterns from environmental DNA data“. Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30390/document.
Integrative patterns of biodiversity, such as the distribution of taxa abundances and the spatial turnover of taxonomic composition, have been under scrutiny from ecologists for a long time, as they offer insight into the general rules governing the assembly of organisms into ecological communities. Thanks to recent progress in high-throughput DNA sequencing, these patterns can now be measured in a fast and standardized fashion through the sequencing of DNA sampled from the environment (e.g. soil or water), instead of relying on tedious fieldwork and rare naturalist expertise. They can also be measured for the whole tree of life, including the vast and previously unexplored diversity of microorganisms. Taking full advantage of this new type of data is challenging, however: DNA-based surveys are indirect, and suffer as such from many potential biases; they also produce large and complex datasets compared to classical censuses. The first goal of this thesis is to investigate how statistical tools and models classically used in ecology or coming from other fields can be adapted to DNA-based data so as to better understand the assembly of ecological communities. The second goal is to apply these approaches to soil DNA data from the Amazonian forest, the Earth's most diverse land ecosystem. Two broad types of mechanisms are classically invoked to explain the assembly of ecological communities: 'neutral' processes, i.e. the random birth, death and dispersal of organisms, and 'niche' processes, i.e. the interaction of the organisms with their environment and with each other according to their phenotype. Disentangling the relative importance of these two types of mechanisms in shaping taxonomic composition is a key ecological question, with many implications from estimating global diversity to conservation issues.
In the first chapter, this question is addressed across the tree of life by applying the classical analytic tools of community ecology to soil DNA samples collected from various forest plots in French Guiana. The second chapter focuses on the neutral aspect of community assembly.[...]
Kivinen, Jyri Juhani. „Statistical models for natural scene data“. Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/8879.
Steinberg, Daniel. „An Unsupervised Approach to Modelling Visual Data“. Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9415.
Das, Debasish. „Bayesian Sparse Regression with Application to Data-driven Understanding of Climate“. Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/313587.
Ph.D.
Sparse regressions based on constraining the L1-norm of the coefficients became popular due to their ability to handle high dimensional data unlike the regular regressions which suffer from overfitting and model identifiability issues especially when sample size is small. They are often the method of choice in many fields of science and engineering for simultaneously selecting covariates and fitting parsimonious linear models that are better generalizable and easily interpretable. However, significant challenges may be posed by the need to accommodate extremes and other domain constraints such as dynamical relations among variables, spatial and temporal constraints, need to provide uncertainty estimates and feature correlations, among others. We adopted a hierarchical Bayesian version of the sparse regression framework and exploited its inherent flexibility to accommodate the constraints. We applied sparse regression for the feature selection problem of statistical downscaling of the climate variables with particular focus on their extremes. This is important for many impact studies where the climate change information is required at a spatial scale much finer than that provided by the global or regional climate models. Characterizing the dependence of extremes on covariates can help in identification of plausible causal drivers and inform extremes downscaling. We propose a general-purpose sparse Bayesian framework for covariate discovery that accommodates the non-Gaussian distribution of extremes within a hierarchical Bayesian sparse regression model. We obtain posteriors over regression coefficients, which indicate dependence of extremes on the corresponding covariates and provide uncertainty estimates, using a variational Bayes approximation. 
The method is applied for selecting informative atmospheric covariates at multiple spatial scales, as well as indices of large-scale circulation and global warming related to the frequency of precipitation extremes over the continental United States. Our results confirm the dependence relations that may be expected from known precipitation physics and generate novel insights which can inform physical understanding. We plan to extend our model to discover covariates for extreme intensity in the future. We further extend our framework to handle the dynamic relationship among the climate variables using a nonparametric Bayesian mixture of sparse regression models based on the Dirichlet Process (DP). The extended model can achieve simultaneous clustering and discovery of covariates within each cluster. Moreover, a priori knowledge about the association between pairs of data-points is incorporated in the model through must-link constraints on a Markov Random Field (MRF) prior. A scalable and efficient variational Bayes approach is developed to infer posteriors on regression coefficients and cluster variables.
Temple University--Theses
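The L1-penalized ("sparse") regression this abstract builds on can be illustrated without any of the thesis's Bayesian machinery. A minimal sketch with synthetic data and a plain ISTA solver, entirely our own construction and not the dissertation's method:

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.eigvalsh(X.T @ X).max()  # 1 / Lipschitz constant
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = soft_threshold(beta - step * grad, lam * step)
    return beta

rng = np.random.default_rng(0)
n, p = 200, 10
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]          # only two covariates truly matter
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = lasso_ista(X, y, lam=5.0)
support = np.flatnonzero(np.abs(beta_hat) > 0.5)
print(support)  # → [0 1]
```

The L1 penalty drives the eight irrelevant coefficients exactly to zero, which is the "simultaneous covariate selection and fitting" property the abstract refers to.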
LaMar, Michelle Marie. „Models for understanding student thinking using data from complex computerized science tasks“. Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3686374.
The Next Generation Science Standards (NGSS Lead States, 2013) define performance targets which will require assessment tasks that can integrate discipline knowledge and cross-cutting ideas with the practices of science. Complex computerized tasks will likely play a large role in assessing these standards, but many questions remain about how best to make use of such tasks within a psychometric framework (National Research Council, 2014). This dissertation explores the use of a more extensive cognitive modeling approach, driven by the extra information contained in action data collected while students interact with complex computerized tasks. Three separate papers are included. In Chapter 2, a mixture IRT model is presented that simultaneously classifies student understanding of a task while measuring student ability within their class. The model is based on differentially scoring the subtask action data from a complex performance. Simulation studies show that both class membership and class-specific ability can be reasonably estimated given sufficient numbers of items and response alternatives. The model is then applied to empirical data from a food-web task, providing some evidence of feasibility and validity. Chapter 3 explores the potential of using a more complex cognitive model for assessment purposes. Borrowing from the cognitive science domain, student decisions within a strategic task are modeled with a Markov decision process. Psychometric properties of the model are explored and simulation studies report on parameter recovery within the context of a simple strategy game. In Chapter 4 the Markov decision process (MDP) measurement model is then applied to an educational game to explore the practical benefits and difficulties of using such a model with real world data. Estimates from the MDP model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
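The dissertation's MDP measurement model is its own contribution, but the underlying decision-process machinery it borrows — computing optimal values and a policy by value iteration — can be sketched on a toy problem. All states, actions, and rewards below are invented for illustration:

```python
# Value iteration on a tiny 2-state, 2-action Markov decision process.
# P[s][a] is a list of (probability, next_state, reward) transitions.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in P}
for _ in range(200):  # iterate the Bellman optimality operator to a fixed point
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in P[s].values()
        )
        for s in P
    }

# Greedy policy with respect to the converged value function.
policy = {
    s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
    for s in P
}
print(V, policy)
```

In a measurement setting like the one described above, the direction is reversed: observed action sequences are compared against such optimal policies to infer student parameters, which this sketch does not attempt.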
Maloo, Akshay. „Dynamic Behavior Visualizer: A Dynamic Visual Analytics Framework for Understanding Complex Networked Models“. Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/25296.
Master of Science
Izumi, Kenji. „Application of Paleoenvironmental Data for Testing Climate Models and Understanding Past and Future Climate Variations“. Thesis, University of Oregon, 2014. http://hdl.handle.net/1794/18510.
Lipecki, Johan, and Viggo Lundén. „The Effect of Data Quantity on Dialog System Input Classification Models“. Thesis, KTH, Hälsoinformatik och logistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237282.
This thesis investigates how different amounts of data affect different kinds of word-vector models for classifying input to dialogue systems. It tests the hypotheses that there is a training-data threshold above which dense word-vector models reach state-of-the-art performance, and that character-level n-gram word-vector classifiers are particularly well suited to Swedish, on the grounds that compounding is especially productive in Swedish and that character-level models can classify previously unseen words. It also evaluates the hypothesis that classifiers trained on single statements are better suited to classifying input in chat conversations than classifiers trained on entire chat conversations. The results support neither hypothesis; instead they show that sparse vector models perform very well in the classification tests conducted. Furthermore, the results show that a corpus of 799,544 words is not enough to train dense word-vector models well, but that conversations are more than sufficient for training models to classify questions and statements in chat conversations, since the models trained on user input, statement by statement, rather than on entire chat conversations, did not produce better classifiers for chat statements.
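The character-level n-gram features the abstract credits for handling unseen compounds are easy to sketch. A minimal illustration (the boundary markers and n-gram size are our choices, not the thesis's):

```python
from collections import Counter

def char_ngrams(word, n=3):
    """Character n-grams with boundary markers. An unseen compound still
    shares n-grams with the words it is built from."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def bag_of_ngrams(sentence, n=3):
    """A sparse feature vector: n-gram -> count."""
    feats = Counter()
    for word in sentence.lower().split():
        feats.update(char_ngrams(word, n))
    return feats

print(char_ngrams("sjukhus"))
# → ['<sj', 'sju', 'juk', 'ukh', 'khu', 'hus', 'us>']
```

For example, the Swedish compound "sjukhus" (hospital) shares the trigram "hus" with the standalone word "hus" (house), which is why such features help with productive compounding.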
Abufouda, Mohammed [author], and Katharina [academic supervisor] Zweig. „Learning From Networked-data: Methods and Models for Understanding Online Social Networks Dynamics / Mohammed Abufouda ; Betreuer: Katharina Zweig“. Kaiserslautern : Technische Universität Kaiserslautern, 2020. http://d-nb.info/1221599747/34.
Wojatzki, Michael Maximilian [author], and Torsten [academic supervisor] Zesch. „Computer-assisted understanding of stance in social media : formalizations, data creation, and prediction models / Michael Maximilian Wojatzki ; Betreuer: Torsten Zesch“. Duisburg, 2019. http://d-nb.info/1177681471/34.
Lee, Xing Ju. „Statistical and simulation modelling for enhanced understanding of hospital pathogen and related health issues“. Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103762/1/Xing%20Ju_Lee_Thesis.pdf.
Green, Daniel. „Understanding urban rainfall-runoff responses using physical and numerical modelling approaches“. Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/33530.
Alfadda, Dalal Abdulaziz. „How Does a ‘Model of Graphics’ Approach and Peer Tutoring Lead to Deep Understanding of Data Visualisation?“ Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/27203.
Cowburn, I. Malcolm, Victoria J. Lavis and Tammi Walker. „BME sex offenders in prison: the problem of participation in offending behaviour groupwork programmes: a tripartite model of understanding“. De Montfort University and Sheffield Hallam University, 2008. http://hdl.handle.net/10454/2550.
Wahi, Rabbani Rash-ha. „Towards an understanding of the factors associated with severe injuries to cyclists in crashes with motor vehicles“. Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121426/1/Rabbani%20Rash-Ha_Wahi_Thesis.pdf.
Holm, Henrik. „Bidirectional Encoder Representations from Transformers (BERT) for Question Answering in the Telecom Domain. : Adapting a BERT-like language model to the telecom domain using the ELECTRA pre-training approach“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301313.
Bidirectional language models such as BERT have achieved great success in natural language processing in recent years. Several extensions of BERT have been developed, among them ELECTRA, whose novel discriminative pre-training procedure shortens training time. The majority of research in the area is carried out on data from the general domain; in other words, there is room for new knowledge in domains with domain-specific language. This work explores methods for adapting a bidirectional language model to the telecom domain. To ensure high efficiency in the pre-training stage, the ELECTRA model is used. Performance in the target domain is measured on a telecom question-answering dataset. Three domain-adaptation methods are examined: (1) continued pre-training, on telecom text, of a model pre-trained on the general domain; (2) pre-training from scratch on telecom text; and (3) pre-training from scratch on a combination of telecom and general-domain text. The experiments show that method 1 is both cost-effective and advantageous in terms of performance. Even after a short period of continued pre-training, clear improvements in question answering in the target domain can be observed, while generalizability is retained. Approach 2 achieves the highest performance in the target domain, albeit with markedly worse generalization. Method 3 combines the advantages of the previous two, with high performance both in the target domain and in the general domain, while also allowing the use of a tokenizer vocabulary well suited to both. In summary, the suitability of a domain-adaptation method is determined by the situation at hand, the data provided, and the computational resources available.
The results demonstrate the clear gains that domain adaptation can bring, even when the question-answering task itself is learned by training on a dataset drawn from the general domain because in-domain question-answering data is insufficient.
Cox, Katrina M. „Understanding Brigham Young University's Technology Teacher Education Program's Sucess in Attracting and Retaining Female Students“. Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1416.pdf.
Enqvist, Juulia. „Developing an understanding of users through an insights generation model : How insights about users can be generated from a variety of sources available in an organization“. Thesis, Uppsala universitet, Institutionen för informatik och media, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-331969.
Beillevaire, Marc. „Inside the Black Box: How to Explain Individual Predictions of a Machine Learning Model : How to automatically generate insights on predictive model outputs, and gain a better understanding on how the model predicts each individual data point“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229667.
Machine learning models are becoming more and more powerful and accurate, but their good predictions often come with high complexity. Depending on the situation, such a lack of interpretability can be an important, even blocking, problem. This is especially the case when users need to be able to trust the model in order to make a decision based on its prediction. For example, an insurance company may use a machine learning algorithm to detect fraud, but the company wants to be sure that the model is based on meaningful variables before actually taking action and investigating a particular individual. This thesis describes and explains several explanation methods, on many datasets of both textual and numerical types, on classification and regression problems.
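One of the simplest members of the explanation-method family surveyed above is perturbation-based attribution: replace each feature with a neutral baseline and record how the prediction moves. A minimal sketch, with a toy "black box" of our own invention standing in for a trained model:

```python
def occlusion_importance(predict, x, baseline=0.0):
    """Score each feature by how much the prediction drops when that
    feature is replaced by a neutral baseline value."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        scores.append(base_pred - predict(perturbed))
    return scores

# A toy "black box": a fixed linear scorer standing in for a trained model.
weights = [2.0, 0.0, -1.0]
predict = lambda x: sum(w * v for w, v in zip(weights, x))

print(occlusion_importance(predict, [1.0, 5.0, 2.0]))  # → [2.0, 0.0, -2.0]
```

The per-feature scores reveal which inputs actually drove this individual prediction, which is exactly the kind of evidence the fraud-investigation example calls for.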
O'Shea, Philip James. „Stochastic models for speech understanding“. Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337961.
Douzon, Thibault. „Language models for document understanding“. Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.
Every day, countless documents are received and processed by companies worldwide. In an effort to reduce the cost of processing each document, the largest companies have resorted to document automation technologies. In an ideal world, a document can be automatically processed without any human intervention: its content is read, and information is extracted and forwarded to the relevant service. The state-of-the-art techniques have quickly evolved in the last decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architecture for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training are able to perform document understanding tasks with high performance. We show that, when used as a token classifier for information extraction, transformers learn the task exceptionally efficiently compared to recurrent networks. Transformers only need a small proportion of the training data to reach close to maximum performance. This highlights the importance of self-supervised pre-training for future fine-tuning. In the following part, we design specialized pre-training tasks to better prepare the model for specific data distributions such as business documents. By acknowledging the specificities of business documents, such as their table structure and their over-representation of numeric figures, we are able to target specific skills useful for the model in its future tasks. We show that those new tasks improve the model's downstream performances, even with small models.
Using this pre-training approach, we are able to reach the performances of significantly bigger models without any additional cost during fine-tuning or inference. Finally, in the last part, we address one drawback of the transformer architecture, namely its computational cost on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, due to how they approximate the attention computation, efficient models suffer from a small but significant performance drop on short sequences compared to classical architectures. This incentivizes the use of different models depending on the input length and enables concatenating multimodal inputs into a single sequence.
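The quadratic cost the abstract refers to comes from the full attention score matrix, which is n x n for a sequence of length n. A minimal NumPy sketch of classic scaled dot-product attention (shapes and data are illustrative; this is not the thesis's model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Classic (quadratic) attention: every position attends to every
    other, so the score matrix is n x n -- the cost efficient variants cut."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (n, n): quadratic in n
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(1)
n, d = 6, 4                                   # sequence length, head size
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.shape)                     # (6, 4) (6, 6)
```

Efficient variants approximate or sparsify the (n, n) weight matrix, which is what produces the long-sequence savings and the small short-sequence penalty discussed above.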
Satkin, Scott. „Data-Driven Geometric Scene Understanding“. Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/280.
Alsheikh, Sami Thabet. „Automated understanding of data visualizations“. Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112830.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 63-65).
When a person views a data visualization (graph, chart, infographic, etc.), they read the text and process the images to quickly understand the communicated message. This research works toward emulating this ability in computers. In pursuing this goal, we have explored three primary research objectives: 1) extracting and ranking the most relevant keywords in a data visualization, 2) predicting a sensible topic and multiple subtopics for a data visualization, and 3) extracting relevant pictographs from a data visualization. For the first task, we create an automatic text extraction and ranking system which we evaluate on 202 MASSVIS data visualizations. For the last two objectives, we curate a more diverse and complex dataset, Visually. We devise a computational approach that automatically outputs the textual and visual elements predicted to be representative of the data visualization content. Concretely, from the curated Visually dataset of 29K large infographic images sampled across 26 categories and 391 tags, we present an automated two-step approach: first, we use extracted text to predict the text tags indicative of the infographic content, and second, we use these predicted text tags to localize the most diagnostic visual elements (what we have called "visual tags"). We report performances on a categorization and multi-label tag prediction problem and compare the results to human annotations. Our results show promise for automated human-like understanding of data visualizations.
by Sami Thabet Alsheikh.
M. Eng.
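A common baseline for the keyword-ranking objective described above is TF-IDF: score a visualization's extracted words by how frequent they are in it and how rare they are elsewhere. A minimal sketch (the corpus and weighting details are ours, not the thesis's):

```python
import math
from collections import Counter

def tfidf_rank(doc_tokens, corpus):
    """Rank a document's tokens by TF-IDF against a background corpus --
    a simple way to surface its most diagnostic keywords."""
    n_docs = len(corpus)
    df = Counter(tok for doc in corpus for tok in set(doc))  # document freq
    tf = Counter(doc_tokens)
    scores = {
        tok: (count / len(doc_tokens)) * math.log((1 + n_docs) / (1 + df[tok]))
        for tok, count in tf.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

corpus = [
    ["global", "temperature", "rise", "chart"],
    ["smartphone", "sales", "chart"],
    ["coffee", "consumption", "chart", "chart"],
]
print(tfidf_rank(corpus[2], corpus))  # "coffee"/"consumption" outrank "chart"
```

Words that appear in every visualization ("chart") score near zero, while document-specific words rise to the top, which is the behavior a keyword extractor needs.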
David-Rus, Richard. „Explanation and Understanding through Scientific Models“. Diss., lmu, 2009. http://nbn-resolving.de/urn:nbn:de:bvb:19-111655.
Ladicky, Lubor. „Global structured models towards scene understanding“. Thesis, Oxford Brookes University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543818.
Segal, Aleksandr V. „Iterative Local Model Selection for tracking and mapping“. Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:8690e0e0-33c5-403e-afdf-e5538e5d304f.
Wachsmuth, Sven. „Multi-modal scene understanding using probabilistic models“. [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=963782053.
Martini, Massimo. „Deep Learning based models for Space Understanding“. Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/295461.
The understanding of a space has always been of great interest to the scientific community, as it is widely used in various fields. The aim of this research is to improve the state of the art in all aspects of space understanding, both static and dynamic. Initially, a new deep learning method for point cloud semantic segmentation of a space is described. This approach uses additional discriminative features compared to the state of the art. Experiments were carried out both in the cultural heritage field and on indoor scenes. Then, generative approaches are proposed as a data augmentation technique. The reliability of the methods was evaluated on a novel dataset, and the results obtained showed that the methods outperformed the state of the art. The proposed DGCNN-Mod increases the accuracy on the 2 test scenes of the ArCH dataset by 26.86% and 4.37% compared to DGCNN; handcrafted features allow it to reach 28.44% and 6.21%. Then, a mixed methodology combining ML and DL approaches is proposed for the change detection task on a dynamic scene. It exploits the extraction of visual and textual features from RGB images acquired in a retail environment. Their union allows the training of a final classifier that gives an overall result about the state of a space. The reliability of the proposed methods was investigated using a novel dataset and by studying the behaviour of consumers. Finally, an algorithm for person re-identification using RGB-D videos in a top-view configuration is described. It is designed to work in both closed- and open-world environments, and it integrates visual, spatial and temporal features. This method has been validated on two newly acquired RGB-D video datasets, for a retail and a museum environment. The results obtained showed that the methods outperformed state-of-the-art approaches.
In fact, the proposed TVOW improves the accuracy on the new TVPR2 dataset by 2.72% and the accuracy on the TVPR dataset by 0.45%.
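The DGCNN family the abstract builds on gets its discriminative power from local edge features: each point is described together with the offsets to its nearest neighbours. A minimal NumPy sketch of that feature construction (toy 2-D points and our own choice of k; the thesis's modified features are not reproduced here):

```python
import numpy as np

def knn_edge_features(points, k=2):
    """For each point, find its k nearest neighbours and build DGCNN-style
    edge features (x_i, x_j - x_i) describing the local geometry."""
    diff = points[:, None, :] - points[None, :, :]      # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                      # exclude self-matches
    nbrs = np.argsort(dist, axis=1)[:, :k]              # (n, k) neighbour ids
    centre = np.repeat(points[:, None, :], k, axis=1)   # x_i
    offset = points[nbrs] - centre                      # x_j - x_i
    return np.concatenate([centre, offset], axis=-1)    # (n, k, 2*d)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
feats = knn_edge_features(pts, k=2)
print(feats.shape)  # (4, 2, 4)
```

A network then applies shared layers to these per-edge features and pools over neighbours, which is what lets it reason about local surface shape rather than isolated points.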
Liu, Jiali. „Data expression : understanding and supporting alternatives in data analysis processes“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT022.
To make sense of data, analysts consider different kinds of alternatives: they explore diverse sets of hypotheses, try out different types of methods, and experiment with a broad space of possible solutions. These alternatives influence each other within a dynamic and complex sensemaking process. Current analytic tools, however, rarely consider such alternatives as an integral part of the analysis, making the process cumbersome and cognitively demanding. Applying various empirical methods and tool designs, we address the following questions: (1) What are alternatives, and how do they fit within the sensemaking process? And (2) how can tools better support the exploration and management of alternatives? This dissertation contains three parts. Part I explores the role of alternatives through interviews and observations with analysts. Based on the results and our analysis, we contribute characterisations of alternatives and a framework to help describe and reason about them. Part II focuses on supporting alternatives in the context of affinity diagramming for qualitative data analysis. Through interviews with practitioners, combined with our own experience, we propose a design space to characterise the various kinds of alternatives engaged in such a sensemaking process. We further provide a vision and proof-of-concept system, ADQDA, to show how analysts can fluidly transition between alternative analysis phases, methods, and representations, and how they can flexibly appropriate various devices to suit the tasks at hand or to extend the analysis space. Part III discusses alternatives in the context of reuse.
We envision a novel reuse technique, "computational transclusion", which maintains various dynamic links between the original and the reused contents (the alternatives) to facilitate tracking and coordinating changes. We built a sandbox system to probe different reuse scenarios and to explore the various links between alternatives and their possible reifications in a notebook-style user interface.
Modur, Sharada P. „Missing Data Methods for Clustered Longitudinal Data“. The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1274876785.
Borgelt, Christian. „Data mining with graphical models“. [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=962912107.
Osuna Echavarría, Leyre Estíbaliz. „Semiparametric Bayesian Count Data Models“. Diss., lmu, 2004. http://nbn-resolving.de/urn:nbn:de:bvb:19-25573.
Sanz-Alonso, Daniel. „Assimilating data into mathematical models“. Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/83231/.
Pliuskuvienė, Birutė. „Adaptive data models in design“. Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080627_143940-41525.
The dissertation examines the problem of adapting software tools that implement solutions to applied problems, whose volatility is driven by changes in the content of the primary data, in their structures, and in the algorithms of the applied problems being solved.
Farewell, Daniel Mark. „Linear models for censored data“. Thesis, Lancaster University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441117.
Woodgate, Rebecca A. „Data assimilation in ocean models“. Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359566.
Moore, A. M. „Data assimilation in ocean models“. Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375276.
Louzada-Neto, Francisco. „Hazard models for lifetime data“. Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268248.
Ivan, Thomas R. „Comparison of data integrity models“. Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/43739.
Data integrity in computer-based information systems is a concern because of the damage that can be done by unauthorized manipulation or modification of data. While a standard exists for data security, there is currently no accepted standard for integrity. A data integrity policy needs to be incorporated into the data security standard in order to produce a complete protection policy. Several existing models address data integrity. The Biba, Goguen and Meseguer, and Clark/Wilson data integrity models each offer a definition of data integrity and introduce their own mechanisms for preserving it. Acceptance of one of these models as a standard for data integrity would create a complete protection policy that addresses both security and integrity.
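Of the models named above, Biba's strict integrity policy is the simplest to state mechanically: subjects may not read down or write up the integrity ordering. A schematic check (the labels and levels are illustrative, not part of the thesis):

```python
# Biba strict-integrity rules: "no read down, no write up".
# Higher number = higher integrity level.
LEVELS = {"untrusted": 0, "user": 1, "system": 2}

def can_read(subject_level, object_level):
    """A subject may only read objects at its own or a higher integrity
    level, so low-integrity data cannot contaminate its decisions."""
    return LEVELS[object_level] >= LEVELS[subject_level]

def can_write(subject_level, object_level):
    """A subject may only write objects at its own or a lower integrity
    level, so it cannot corrupt higher-integrity data."""
    return LEVELS[object_level] <= LEVELS[subject_level]

print(can_read("user", "system"), can_write("user", "system"))  # True False
```

Note the duality with confidentiality models such as Bell-LaPadula, where the arrows point the other way ("no read up, no write down"); integrity protects trustworthiness rather than secrecy.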
Granstedt, Jason Louis. „Data Augmentation with Seq2Seq Models“. Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78315.
Master of Science
Khatiwada, Aastha. „Multilevel Models for Longitudinal Data“. Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3090.
Rolfe, Margaret Irene. „Bayesian models for longitudinal data“. Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/34435/1/Margaret_Rolfe_Thesis.pdf.
Pulgatti, Leandro Duarte. „Data migration between different data models of NOSQL databases“. Repositório Institucional da UFPR, 2017. http://hdl.handle.net/1884/49087.
Dissertation (master's) - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 17/02/2017
Includes references: leaves 76-79
Abstract: Since its origin the NoSql Database have achieved widespread use. Due to the lack of standards for development in this new technology great challenges emerges. Among these challenges, the data migration between the various solutions has proved particularly difficult. There are heterogeneous datamodels, access languages and frameworks available, which makes data migration even more complex. Most part of the solutions available today focus on providing an abstract and generic representation for all data models. These solutions focus in design adapters to homogeneously access the data, but not to specifically implement transformations between them. These approaches often need a framework to access the data, which may prevent from using them in some scenarios. This dissertation proposes the creation of a metamodel and a series of rules capable of assisting in the data migration task. The data can be converted to various desired formats through an intermediate state. To validate the solution several tests were performed with different systems and using real data available. Key-words: NoSql Databases. Metamodel. Data Migration.
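The migration strategy this abstract describes, converting data between heterogeneous NoSQL models through a generic intermediate state, can be illustrated with a small sketch. This is not code from the dissertation: the function names, the flattened (path, value) intermediate form, and the document-to-key-value direction are all illustrative assumptions.

```python
# Illustrative sketch (not from the dissertation): migrating a record
# between two NoSQL-style representations via a generic intermediate form.

def to_intermediate(doc: dict) -> list:
    """Flatten a document-store record into generic (path, value) pairs."""
    pairs = []
    def walk(prefix, value):
        if isinstance(value, dict):
            for k, v in value.items():
                walk(f"{prefix}.{k}" if prefix else k, v)
        else:
            pairs.append((prefix, value))
    walk("", doc)
    return pairs

def to_key_value(pairs: list, row_key: str) -> dict:
    """Materialize the intermediate pairs as a wide key-value row."""
    return {f"{row_key}:{path}": value for path, value in pairs}

doc = {"user": {"name": "Ada", "age": 36}}
row = to_key_value(to_intermediate(doc), "u1")
print(row)  # {'u1:user.name': 'Ada', 'u1:user.age': 36}
```

Because every source model is mapped into the same neutral pair representation, adding a new target format only requires one new converter out of the intermediate state, rather than one converter per source/target combination.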
Vemulapalli, Eswar Venkat Ram Prasad 1976. „Architecture for data exchange among partially consistent data models“. Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/84814.
Includes bibliographical references (leaves 75-76).
by Eswar Venkat Ram Prasad Vemulapalli.
S.M.
Hultin, Felix. „Understanding Context-free Grammars through Data Visualization“. Thesis, Stockholms universitet, Avdelningen för datorlingvistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-130953.
Ever since the late 1950s, context-free grammars have played an important role in linguistic theories, been used in introductory courses, and been extended to other research fields. At the same time, data visualization in modern web development has made it possible to create rich visualizations in the browser. This thesis unites these two developments through the development of a web application designed for writing context-free grammars, parsing sentences, and visualizing the output. A usability study, consisting of user tests and user interviews, is conducted to investigate possible advantages and disadvantages of visualization when writing context-free grammars. The results show that data visualization was used by the participants in a limited way, in the sense that it helped them see whether sentences could be parsed and, if a sentence was not parsed, where the parsing failed. Future improvements to the application and studies of them are suggested, as well as an expansion of data visualization within linguistics.
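The core task the thesis application supports, checking whether a sentence can be parsed under a context-free grammar, can be sketched with a minimal CYK recognizer. This is not the thesis's implementation; the grammar, rule encoding, and function name are illustrative assumptions, and the grammar must be in Chomsky normal form for CYK.

```python
# Illustrative sketch (not the thesis application): a minimal CYK
# recognizer that reports whether a sentence parses under a CNF grammar.

def cyk_recognize(words, lexical, binary, start="S"):
    n = len(words)
    # table[i][j]: set of nonterminals deriving words[i..j] inclusive
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = {A for A, term in lexical if term == w}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for A, B, C in binary:  # rule A -> B C
                    if B in table[i][k] and C in table[k + 1][j]:
                        table[i][j].add(A)
    return start in table[0][n - 1]

lexical = [("Det", "the"), ("N", "dog"), ("V", "barks")]
binary = [("S", "NP", "V"), ("NP", "Det", "N")]
print(cyk_recognize(["the", "dog", "barks"], lexical, binary))  # True
print(cyk_recognize(["dog", "the", "barks"], lexical, binary))  # False
```

A visualization layer like the one the thesis describes could render the triangular chart `table` directly, which is exactly the information that shows at which span a failed parse broke down.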
Jarvis, Stuart. „Optimising, understanding and quantifying Itrax XRF data“. Thesis, University of Southampton, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.580571.
Harvey, William John. „Understanding High-Dimensional Data Using Reeb Graphs“. The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1342614959.
Brau, Avila Ernesto. „Bayesian Data Association for Temporal Scene Understanding“. Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/312653.
Zama Ramirez, Pierluigi <1992>. „Deep Scene Understanding with Limited Training Data“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9815/1/zamaramirez_pierluigi_tesi.pdf.
Hull, Lynette. „FRACTION MODELS THAT PROMOTE UNDERSTANDING FOR ELEMENTARY STUDENTS“. Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3127.
M.A.
Department of Teaching and Learning Principles
Education
K-8 Mathematics and Science Education