Theses / dissertations on the topic "Creation of data models"

Follow this link to see other types of publications on the topic: Creation of data models.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the top 50 works (theses / dissertations) on the subject "Creation of data models".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate a bibliographic citation for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online whenever it is present in the metadata.

Browse theses / dissertations from a wide range of scientific fields and compile an accurate bibliography.

1

Wojatzki, Michael Maximilian [Verfasser], and Torsten [Akademischer Betreuer] Zesch. "Computer-assisted understanding of stance in social media : formalizations, data creation, and prediction models / Michael Maximilian Wojatzki ; Betreuer: Torsten Zesch". Duisburg, 2019. http://d-nb.info/1177681471/34.

2

Stienmetz, Jason Lee. "Foundations for a Network Model of Destination Value Creation". Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/419874.

Abstract:
Tourism and Sport
Ph.D.
Previous research has demonstrated that a network model of destination value creation (i.e., the Destination Value System model) based on the flows of travelers within a destination can be used to estimate and predict individual attractions' marginal contributions to total visitor expenditures. While development of the Destination Value System (DVS) to date has focused on the value created from dyadic relationships within the destination network, previous research supports the proposition that system-level network structures significantly influence the total value created within a destination. This study, therefore, builds upon previous DVS research in order to determine the relationships between system-level network structures and total value creation within a destination. To answer this question, an econometric analysis of panel data covering 43 Florida destinations over the period from 2007 to 2015 was conducted. The panel data was created utilizing volunteered geographic information (VGI) obtained from 4.6 million photographs shared on Flickr. Results of the econometric analysis indicate that both seasonal effects and DVS network structures have statistically significant relationships with total tourism-related sales within a destination. Specifically, network density, network out-degree centralization, and network global clustering coefficient are found to have negative and statistically significant effects on destination value creation, while network in-degree centralization, network betweenness centralization, and network subcommunity count are found to have positive and statistically significant effects. Quarterly seasonality is also found to have dynamic and statistically significant effects on total tourism-related sales within a destination.
Based on the network structures of destinations and total tourism-related sales within destinations, this study also uses k-means cluster analysis to classify tourism destinations into a taxonomy of six different system types (Exploration, Involvement, Development I, Development II, Consolidation, and Stars). This taxonomy of DVS types is found to correspond to Butler's (1980) conceptualization of the destination life cycle, and additional data visualization and exploration based on the DVS taxonomy finds distinct characteristics in destination structure, dynamics, evolution, and performance that may be useful for benchmarking. Additionally, this study assesses the quality of VGI data for tourism-related research by comparing DVS network structures based on Flickr data and visitor intercept survey data. Support for the use of VGI data is found, provided that thousands of observations are available for analysis. When fewer observations are available, aggregation techniques are recommended in order to improve the quality of overall destination network system quantification. This research makes important contributions to both the academic literature and the practical management of destinations by demonstrating that DVS network structures significantly influence the economic value created within the destination, and thus suggests that a strategic network management approach is needed for the governance of competitive destinations. As a result, this study provides a strong foundation for the DVS model and future research in the areas of destination resiliency, "smarter" destination management, and tourism experience design.
Temple University--Theses
3

Khadgi, Vinaya, and Tianyi Wang. "Automatic Creation of Researcher's Competence Profiles Based on Semantic Integration of Heterogeneous Data sources". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsområde Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-21904.

Abstract:
Research journals and publications are a great source of knowledge, produced through the hard work of researchers. Several digital libraries maintain records of such research publications so that the general public and other researchers can find and study previous work in the research field they are interested in. To make searching effective and easier, all of these digital libraries keep a record/database storing the meta-data of the publications. These meta-data records are generally well designed to keep the vital records of the publications/articles, and they have the potential to give information about researchers, their research activities, and hence their competence profiles. This thesis is a study of methods for building the competence profile of researchers based on the records of their publications in well-known digital libraries. Researchers publish in different publishing houses, so, in order to make a complete profile, the data from several of these heterogeneous digital library sources have to be integrated semantically. Several semantic technologies were studied in order to investigate the challenges of integrating the heterogeneous sources and modeling the researchers' competence profiles. An approach of on-demand profile creation was chosen, where a user of the system enters some basic name details of the researcher whose profile is to be created. In this thesis work, the Design Science Research methodology was used as the research method and, to complement it with a working artifact, Scrum, an agile software development methodology, was used to develop a competence profile system as a proof of concept.
4

Chawinga, Winner Dominic. "Research data management in public universities in Malawi". University of the Western Cape, 2019. http://hdl.handle.net/11394/6951.

Abstract:
Philosophiae Doctor - PhD
The emergence and subsequent uptake of Information and Communication Technologies have transformed the research processes in universities and research institutions across the globe. One indelible impact of Information and Communication Technologies on the research process is the increased generation of research data in digital format. This study investigated how research data has been generated, organised, shared, stored, preserved, accessed and re-used in Malawian public universities, with a view to proposing a framework for research data management in universities in Malawi. The objectives of the study were: to determine research data creation, sharing and re-use practices in public universities in Malawi; to investigate research data preservation practices in public universities in Malawi; to investigate the competencies that librarians and researchers need to effectively manage research data; and to find out the challenges that affect the management of research data in public universities in Malawi. Apart from being guided by the Community Capability Model Framework (Lyon, Ball, Duke & Day, 2011) and the Data Curation Centre Lifecycle Model (Higgins, 2008), the study was inspired by the pragmatic school of thought, which is the basis for mixed-methods research, enabling the collection of quantitative and qualitative data from two purposively selected universities. A census was used to identify researchers and librarians, while purposive sampling was used to identify directors of research. Questionnaires were used to collect mostly quantitative and some qualitative data from 36 librarians and 187 researchers, while interviews were conducted with directors of research. The Statistical Package for the Social Sciences was used to analyse the quantitative data by producing percentages, means, independent-samples t-tests and one-way analysis of variance. Thematic analysis was used to analyse the qualitative data.
5

Puerto, Valencia J. (Jose). "Predictive model creation approach using layered subsystems quantified data collection from LTE L2 software system". Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201907192705.

Abstract:
The road-map to a continuous and efficient improvement process for a complex software system has multiple stages and many interrelated on-going transformations, these being direct responses to its always evolving environment. The system's scalability under these on-going transformations depends, to a great extent, on the prediction of resource consumption and systemic emergent properties; as systems grow bigger in size and complexity, their predictability decreases in accuracy. A predictive model is used to address the inherent growth in complexity and to increase the predictability of a complex system's performance. The model creation process is driven by the collection of quantified data from different layers of the Long-Term Evolution (LTE) Data-layer (L2) software system. The creation of such a model is possible due to the multiple system analysis tools Nokia has already implemented, allowing a multi-layer data gathering flow. The process consists of, first, stating the differences between the system layers; second, using a layered benchmark approach for data collection at the different levels; third, designing a process flow that organizes the data transformations from collection, filtering, and pre-processing to visualization; and fourth, as a proof of concept, comparing different Performance Measurement (PM) predictive models trained on the collected pre-processed data. The thesis contains, in parallel to the model creation process, the exploration and comparison of various data visualization techniques that address the non-trivial graphical representation of the relations between the subsystems' data. Finally, the current results of the model creation process are presented and discussed. The models were able to explain 54% and 67% of the variance in the two test configurations used in the instantiation of the model creation process proposed in this thesis.
6

Fadul, Waad. "Data-Driven Health Services: an Empirical Investigation on the Role of Artificial Intelligence and Data Network Effects in Value Creation". Thesis, Uppsala universitet, Informationssystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447507.

Abstract:
The purpose of this study is to produce new knowledge concerning the perceived user value generated by machine learning technologies that activate the data network effects factors which create value through various business model themes. The data network effects theory represents a set of factors that increase a user's perceived value of a platform that uses artificial intelligence capabilities. The study followed an abductive research approach, where initially found facts were matched against the data network effects theory to be put in context and understood. The study's data was gathered through semi-structured interviews with experts who were active within the research area, chosen based on their practical experience and their role in the digitization of the healthcare sector. The results show that three out of six factors were fully realized, contributing to value creation, while two factors were only partially realized, which is explained by the exclusion of users' perspectives from the scope of the research. Lastly, only one factor made a limited contribution to value creation, due to the heavy regulations limiting its realization in the health sector. It is concluded that the data network effects moderators contributed differently to the activation of various business model themes for value creation in a general manner, and that further studies should apply the theory to the assessment of one specific AI health offering to take full advantage of its potential. The theoretical implications show that the data network factors are not necessarily equally activated in contributing to value creation, which was not initially highlighted by the theory. Additionally, the practical implications of the study's results may help managers decide which factors to activate for which business model theme.
7

TAHERIFARD, ERSHAD. "Open data in Swedish municipalities? : Value creation and innovation in local public sector organizations". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299877.

Abstract:
Digital transformation is highlighted as a way of solving many of the problems and challenges that the public sector faces in terms of cost developments and increased demands for better public services. Open data is a strategic resource that is necessary for this development to take place, and the municipal sector collects information that could be published to create value in many stages. Previous research suggests that economic value is generated through new innovative services and productivity gains, but also social values such as increased civic participation and transparency. Yet despite previous attempts to stimulate open data, Sweden is far behind comparable countries, and there is a lack of research that looks at exactly how these economic values should be captured. To investigate why this is the case and what role open data plays in value creation in the municipal sector, this study has identified several themes through qualitative interviews with an inductive approach. The study resulted in a deeper theoretical analysis of open data and its properties. By considering it a public good, it is possible to use several explanatory models to explain its slow spread, but also to understand the difficult conditions for value capture, which result in incentive problems. In addition, there are structural problems linked to legislation and infrastructure that hamper the dissemination of open data and its value-creating role in the municipal sector.
8

Huber, Peter, Harald Oberhofer and Michael Pfaffermayr. "Who Creates Jobs? Econometric Modeling and Evidence for Austrian Firm Level Data". WU Vienna University of Economics and Business, 2015. http://epub.wu.ac.at/4650/1/wp205.pdf.

Abstract:
This paper offers an empirical analysis of net job creation patterns at the firm level for the Austrian economy between 1993 and 2013, focusing on the impact of firm size and age. We propose a new estimation strategy based on a two-part model. This allows us to identify the structural parameters of interest and to decompose behavioral differences between exiting and surviving firms. Our findings suggest that, conditional on survival, young Austrian firms experience the largest net job creation rates. Differences in firm size are not able to explain variation in net job creation rates among the group of continuing enterprises. Job destruction induced by market exit, however, is largest among young and small firms, with this effect being even more pronounced during the Great Recession. In order to formulate sensible policy recommendations, a separate treatment of continuing versus exiting firms, as proposed by the new two-part model estimation approach, seems crucial. (authors' abstract)
Series: Department of Economics Working Paper Series
9

Kenjangada, Kariappa Ganapathy, and Marcus Bjersér. "Value as a Motivating Factor for Collaboration : The case of a collaborative network for wind asset owners for potential big data sharing". Thesis, Högskolan i Halmstad, Centrum för innovations-, entreprenörskaps- och lärandeforskning (CIEL), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-40699.

Abstract:
The world's need for energy is increasing while we realize the consequences of existing unsustainable methods of energy production. Wind power is a potential partial solution, but it is a relatively new source of energy. Advances in technology and innovation can be one solution, but the wind energy industry is embracing them too slowly due to, among other reasons, a lack of incentives in terms of the added value provided. Collaboration and big data may provide a key to overcoming this. However, to our knowledge, this research area has received little attention, especially in the context of the wind energy industry. The purpose of this study is to explore value as a motivating factor for potential big data collaboration via a collaborative network. This is explored within the context of big data collaboration and the collaborative network for wind asset owners O2O WIND International. A cross-sectional, multi-method, qualitative, single in-depth case study is conducted. The data collected and analyzed is based on four semi-structured interviews and a set of rich documentary secondary data on 25 of the participants in the collaborative network, in the form of 3866 pages and 124 web pages visited. The main findings are as follows. The 25 participants of the collaborative network were evaluated, and their approaches to three different types of value were visualized through a novel model: a three-dimensional value approach space. From this visualization, clusters of participants can be distinguished, resulting in 6 different approaches to value among the 25 participants. Furthermore, 14 different categories of value that the participants express as possible to create through the collaborative network have been identified. These values have been categorized based on fundamental types of value, their dimensions and four value processes, as well as analyzed for patterns and similarities.
The classification results in a unique categorization of the participants of a collaborative network. These categories serve as customer segments that the focal firm of the collaborative network can target. The interviews resulted in insights about the current state of the industry, existing and future market problems and needs, as well as existing and future market opportunities. Possible business model implications of our findings, for the focal firm behind the collaborative network O2O WIND International as well as for the participants of the collaboration, are then discussed. We conclude that big data and collaborative networks have potential for value creation in the wind power sector, if the business models of those involved take them into account. However, more future research is necessary, and suggestions are made.
10

Vieira, Fábio Danilo 1977. "Modelos baseados em técnicas de mineração de dados para suporte à certificação racial de ovinos". [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/257128.

Abstract:
Advisors: Stanley Robson de Medeiros Oliveira, Samuel Rezende Paiva
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Agrícola
Abstract: The locally adapted breeds of sheep are descended from animals brought over during the colonial period, and for years they were subjected to indiscriminate crossbreeding with exotic breeds. These breeds of sheep are considered important for having characteristics adapted to several Brazilian environmental conditions. To avoid the loss of this important genetic material, the Brazilian Agricultural Research Corporation (Embrapa) decided to include them in its Programme of Research in Genetic Resources, storing them in its genebanks; those with the greatest national prominence are the Creole, Morada Nova and Santa Ines breeds. The selection of sheep to compose these banks is performed through the evaluation of morphological and productive characteristics. However, this assessment is subject to failure, because some crossbred animals maintain characteristics similar to those of the local animals. Thus, identifying whether the animals deposited in the banks belong to a breed is a challenging task. In the search for solutions, in recent years there has been a significant increase in the use of technologies based on SNP (Single Nucleotide Polymorphism) molecular markers. However, the large number of markers generated, which can reach hundreds of thousands per animal, becomes a crucial issue. To address this problem, the aim of this study is to develop models based on data mining techniques to select the main SNP markers for the Creole, Morada Nova and Santa Ines breeds. The data used in this study were obtained from the International Consortium of Sheep and consist of 72 animals of these three breeds and 49,034 SNP markers for each sheep. The result of this study was a set of predictive models based on data mining techniques that selected the main SNP markers to identify the breeds studied. The intersection of the generated models identified a subset of 15 markers with greater potential for identifying the sheep breeds.
The models may be used for the certification of sheep breeds already deposited in genebanks and of new animals to be included, besides supporting breeders' associations interested in certifying their animals, as well as MAPA (Ministry of Agriculture, Livestock and Food Supply) in controlling registered animals. The proposed models can be extended to other production animal species.
Master's
Planejamento e Desenvolvimento Rural Sustentável
Master in Agricultural Engineering
11

Lindgren, Mona, and Anders Sivertsson. "Visualizing the Body Language of a Musical Conductor using Gaussian Process Latent Variable Models : Creating a visualization tool for GP-LVM modelling of motion capture data and investigating an angle based model for dimensionality reduction". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-195692.

Abstract:
In this bachelor's thesis we investigate and visualize a Gaussian process latent variable model (GP-LVM), used to model high-dimensional motion capture data of a musical conductor in a lower-dimensional space. This work expands upon the degree project of K. Karipidou, "Modelling the body language of a musical conductor using Gaussian Process Latent Variable Models", in which GP-LVMs are used to perform dimensionality reduction of motion capture data of a conductor conducting a string quartet, expressing four different underlying emotional interpretations (tender, angry, passionate and neutral). In Karipidou's work, a GP-LVM coupled with K-means and an HMM is used to classify unseen conducting motions into the aforementioned emotional interpretations. We develop a graphical user interface (GUI) for visualizing the lower-dimensional mapping produced by a GP-LVM side by side with the motion capture data. The GUI and the GP-LVM mapping are implemented in Matlab, while the open-source 3D creation suite Blender is used to visualize the motion capture data in greater detail, which is then imported into the GUI. Furthermore, we develop a new GP-LVM in the same manner as Karipidou, but based on the angles between the motion capture nodes, and compare its accuracy in classifying emotion to that of Karipidou's position-based model. The evaluation of the GUI concludes that it is a very useful tool for examining and evaluating a GP-LVM. However, our angle-based model does not improve the classification result compared to Karipidou's position-based one; thus, using Euler angles is deemed inappropriate for this application. Keywords: Gaussian process latent variable model, motion capture, visualization, body language, musical conductor, Euler angles.
12

Chang, Kerry Shih-Ping. "A Spreadsheet Model for Using Web Services and Creating Data-Driven Applications". Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/769.

Abstract:
Web services have made many kinds of data and computing services available. However, using web services often requires significant programming effort, which limits the people who can take advantage of them to a small group of skilled programmers. In this dissertation, I will present a tool called Gneiss that extends the spreadsheet model to support four challenging aspects of using web services: programming two-way data communication with web services, creating interactive GUI applications that use web data sources, using hierarchical data, and using live streaming data. Gneiss contributes innovations in spreadsheet languages, spreadsheet user interfaces and interaction techniques that allow programming tasks which currently require writing complex, lengthy code to instead be done using familiar spreadsheet mechanisms. Spreadsheets are arguably the most successful and popular data tools among people of all programming levels. This work advances the use of spreadsheets to new domains and could benefit a wide range of users, from professional programmers to end-user programmers.
13

Kupferschmidt, Benjamin. "Bulk Creation of Data Acquisition Parameters". International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604250.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Modern data acquisition systems can be very time consuming to configure. The most time consuming aspect of configuring a data acquisition system is defining the measurements that the system will collect. Each measurement has to be uniquely identified in the system and the system needs to know what data the measurement will sample. Data acquisition systems are capable of sampling thousands of measurements in a single test flight. If all of the measurements are created by hand, it can take many hours to input all of the required measurements into the data acquisition system's setup software. This process can also be extremely tedious since many measurements are very similar. This paper will examine several possible solutions to the problem of rapidly creating large numbers of data acquisition measurements. If the list of measurements that need to be created already exists in an electronic format then the simplest approach would be to create an importer. The two main ways to import data are XML and comma-separated value files. This paper will discuss the advantages and disadvantages of both approaches. In addition to importers, this paper will discuss a system that can be used to create large numbers of similar measurements very quickly. This system is ideally suited to MIL-STD-1553 and ARINC-429 bus data. It exploits the fact that most bus measurements are typically very similar to each other. For example, 1553 measurements typically differ only in terms of the command word and the selected data words. This system allows the user to specify ranges of data words for each command word. It can then create the measurements based on the user specified ranges.
14

Robinson, Trevor Thomas. "Automated creation of mixed dimensional finite element models". Thesis, Queen's University Belfast, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.479311.

15

Riley, Jacqueline. "Creating a generic model of accident and emergency departments". Thesis, Glasgow Caledonian University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364778.

16

Enoksson, Fredrik. "Adaptable metadata creation for the Web of Data". Doctoral thesis, KTH, Medieteknik och interaktionsdesign, MID, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154272.

Abstract:
One approach to manage collections is to create data about the things in them. This descriptive data is called metadata, and the term is used in this thesis as a collective noun, i.e. no plural form exists. A library is a typical example of an organization that uses metadata to manage a collection of books. The metadata about a book describes certain attributes of it, for example who the author is. Metadata also makes it possible for a person to judge whether a book is interesting without having to deal with the book itself. The metadata of the things in a collection is a representation of the collection that is easier to deal with than the collection itself. Nowadays metadata is often managed in computer-based systems that enable search and the sorting of search results according to different principles. Metadata can be created both by computers and by humans. This thesis deals with certain aspects of the human activity of creating metadata and includes an explorative study of this activity. The increasing amount of public information being produced must also be easily accessible, and therefore the situation where metadata is part of the Semantic Web has been considered an important part of this thesis. This situation is also referred to as the Web of Data or Linked Data. With the Web of Data, metadata records living in isolation from each other can now be linked together over the web. This will probably change not only what kind of metadata is created, but also how it is created. This thesis describes the construction and use of a framework called Annotation Profiles, a set of artifacts developed to enable a metadata creation environment that is adaptable with respect to what metadata can be created. The main artifact is the Annotation Profile Model (APM), a model that holds enough information for a software application to generate a customized metadata editor from it.
An instance of this model is called an annotation profile and can be seen as a configuration for metadata editors. What metadata can be edited in a metadata editor can thus be changed without modifying the code of the application. Two code libraries that implement the APM have been developed and evaluated, both internally within the research group where they were developed and externally via interviews with software developers who have used one of the libraries. Another artifact presented is a protocol for how RDF metadata can be remotely updated when metadata is edited through a metadata editor. It is also described how the APM opens up possibilities for end-user development, which is one of the avenues of pursuit in future research related to the APM.
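The core idea of a profile-driven editor can be sketched in a few lines: the editor's fields are generated from a configuration rather than hard-coded, so changing what is editable needs no code change. This is an illustrative toy, not the actual APM; the profile format and function names are invented:

```python
# Illustrative sketch (not the actual APM): an "annotation profile" as a
# plain configuration from which metadata-editor fields are generated.

book_profile = {
    "title":  {"label": "Title",  "type": "text", "required": True},
    "author": {"label": "Author", "type": "text", "required": True},
    "year":   {"label": "Year",   "type": "int",  "required": False},
}

def generate_editor(profile):
    """Turn a profile into a list of editor field descriptions."""
    return [{"field": name, **spec} for name, spec in profile.items()]

def validate(profile, record):
    """Return the required fields missing from an edited metadata record."""
    return [n for n, s in profile.items()
            if s["required"] and n not in record]

fields = generate_editor(book_profile)
print([f["field"] for f in fields])
print(validate(book_profile, {"title": "Sagan om ringen"}))
```

Swapping `book_profile` for any other profile reconfigures the generated editor without touching `generate_editor` or `validate`.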

17

Modur, Sharada P. "Missing Data Methods for Clustered Longitudinal Data". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1274876785.

18

Gray, James. "Creation and evolution of compactified cosmologies". Thesis, University of Sussex, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390919.

19

Heyman, Fredrik. "Empirical studies on wages, firm performance and job turnover". Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics [Ekonomiska forskningsinstitutet vid Handelshögsk.] (EFI), 2002. http://www.hhs.se/efi/summary/601.htm.

20

Borgelt, Christian. "Data mining with graphical models". [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=962912107.

21

Osuna, Echavarría Leyre Estíbaliz. "Semiparametric Bayesian Count Data Models". Diss., lmu, 2004. http://nbn-resolving.de/urn:nbn:de:bvb:19-25573.

22

Sanz-Alonso, Daniel. "Assimilating data into mathematical models". Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/83231/.

Abstract:
Chapter 1 is a brief overview of the Bayesian approach to blending mathematical models with data. For this introductory chapter, I do not claim any originality in the material itself, but only in the presentation and in the choice of contents. Chapters 2, 3 and 4 are transcripts of published and submitted papers, with minimal cosmetic modifications. I now detail my contributions to each of these papers. Chapter 2 is a transcript of the published paper "Long-time Asymptotics of the Filtering Distribution for Partially Observed Chaotic Dynamical Systems" [Sanz-Alonso and Stuart, 2015], written in collaboration with Andrew Stuart. The idea of building a unified framework for studying filtering of chaotic dissipative dynamical systems is from Andrew. My ideas include the truncation of the 3DVAR algorithm that allows for unbounded observation noise, using the squeezing property as the unifying arch across all models, and most of the links with control theory. I stated and proved all the results of the paper. I also wrote the first version of the paper, which was subsequently much improved with Andrew's input. Chapter 3 is a transcript of the published paper "Filter Accuracy for the Lorenz 96 Model: Fixed Versus Adaptive Observation Operators" [Law et al., 2016], written in collaboration with Kody Law, Abhishek Shukla, and Andrew Stuart. My contribution to this paper was in proving most of the theoretical results. I did not contribute to the numerical experiments. The idea of using adaptive observation operators is from Abhishek. Chapter 4 is a transcript of the submitted paper "Importance Sampling: Computational Complexity and Intrinsic Dimension" [Agapiou et al., 2015], written in collaboration with Sergios Agapiou, Omiros Papaspiliopoulos, and Andrew Stuart. The idea of relating the two notions of intrinsic dimension described in the paper is from Omiros. Sergios stated and proved Theorem 4.2.3.
Andrew's input was fundamental in making the paper well structured, and in the overall writing style. The paper was written very collaboratively among the four of us, and some of the results were the fruit of many discussions involving different subsets of authors. Some of my inputs include: the idea of using metrics between probability measures to study the performance of importance sampling, establishing connections to tempering, the analysis of singular limits both for inverse problems and filtering, most of the filtering section and in particular the use of the theory of inverse problems to analyze different proposals in the filtering set-up, the proof of Theorem 4.2.1, and substantial input in the proof of all the results of the paper not mentioned before. This paper aims to bring cohesion and new insights into a topic with a vast literature, and I helped towards this goal by doing most of the literature review involved.
23

Pliuskuvienė, Birutė. "Adaptive data models in design". Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080627_143940-41525.

Abstract:
In the dissertation, the problem of adapting software whose instability is caused by changes in the content and structure of primary data, as well as in the algorithms implementing solutions to applied problems, is examined. The solution to the problem is based on a methodology of adapting models for data expressed as relational sets.
24

Farewell, Daniel Mark. "Linear models for censored data". Thesis, Lancaster University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441117.

25

Woodgate, Rebecca A. "Data assimilation in ocean models". Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359566.

26

Moore, A. M. "Data assimilation in ocean models". Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375276.

27

Louzada-Neto, Francisco. "Hazard models for lifetime data". Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268248.

28

Ivan, Thomas R. "Comparison of data integrity models". Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/43739.

Abstract:
Approved for public release; distribution is unlimited
Data integrity in computer based information systems is a concern because of the damage that can be done by unauthorized manipulation or modification of data. While a standard exists for data security, there currently is not an acceptable standard for integrity. There is a need for incorporation of a data integrity policy into the standard concerning data security in order to produce a complete protection policy. There are several existing models which address data integrity. The Biba, Goguen and Meseguer, and Clark/Wilson data integrity models each offer a definition of data integrity and introduce their own mechanisms for preserving integrity. Acceptance of one of these models as a standard for data integrity will create a complete protection policy which addresses both security and integrity.
29

Granstedt, Jason Louis. "Data Augmentation with Seq2Seq Models". Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78315.

Abstract:
Paraphrase sparsity is an issue that complicates the training process of question answering systems: syntactically diverse but semantically equivalent sentences can have significant disparities in predicted output probabilities. We propose a method for generating an augmented paraphrase corpus for the visual question answering system to make it more robust to paraphrases. This corpus is generated by concatenating two sequence to sequence models. In order to generate diverse paraphrases, we sample the neural network using diverse beam search. We evaluate the results on the standard VQA validation set. Our approach results in a significantly expanded training dataset and vocabulary size, but has slightly worse performance when tested on the validation split. Although not as fruitful as we had hoped, our work highlights additional avenues for investigation into selecting more optimal model parameters and the development of a more sophisticated paraphrase filtering algorithm. The primary contribution of this work is the demonstration that decent paraphrases can be generated from sequence to sequence models and the development of a pipeline for developing an augmented dataset.
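The augmentation pipeline described above, concatenating two sequence-to-sequence models and collecting the resulting paraphrases, can be sketched with the two neural models replaced by stub functions. Everything here is hypothetical scaffolding: a real implementation would sample each seq2seq model with diverse beam search rather than applying string rewrites:

```python
# Sketch of a two-stage paraphrase augmentation pipeline; the two stubs
# stand in for sampled seq2seq models (hypothetical placeholders).

def paraphraser_a(question):
    # Stub for the first seq2seq model: a few candidate rewrites.
    return [question.replace("What is", "What's"), question]

def paraphraser_b(question):
    # Stub for the second seq2seq model.
    return [question.replace("picture", "image"), question]

def augment(corpus):
    """Chain both models over the corpus and keep unique paraphrases."""
    augmented = set()
    for q in corpus:
        for p1 in paraphraser_a(q):
            for p2 in paraphraser_b(p1):
                augmented.add(p2)
    return sorted(augmented)

corpus = ["What is in the picture?"]
print(augment(corpus))
```

A paraphrase-filtering step, which the abstract identifies as future work, would slot in between `augment` and training.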
Master of Science
30

Khatiwada, Aastha. "Multilevel Models for Longitudinal Data". Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3090.

Abstract:
Longitudinal data arise when individuals are measured several times during an observation period, and thus the data for each individual are not independent. There are several ways of analyzing longitudinal data when different treatments are compared. Multilevel models are used to analyze data that are clustered in some way. In this work, multilevel models are used to analyze longitudinal data from a case study. Results from other more commonly used methods are compared to multilevel models, and the output of two software packages, SAS and R, is compared. Finally, a method is proposed that consists of fitting individual models for each individual and then performing an ANOVA-type analysis on the estimated parameters of the individual models; its power for different sample sizes and effect sizes is studied by simulation.
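The two-stage method the abstract proposes can be sketched directly: fit a simple model per individual, then run ANOVA on the per-individual estimates across treatment groups. The data below are simulated stand-ins, not the case-study data:

```python
import numpy as np
from scipy import stats

# Stage 1: per-individual least-squares slope over time; stage 2: one-way
# ANOVA on the estimated slopes. Simulated (hypothetical) data.

rng = np.random.default_rng(0)
times = np.arange(5)

def individual_slope(y):
    """Least-squares slope of one individual's repeated measurements."""
    return np.polyfit(times, y, 1)[0]

# Two treatment groups with different true slopes (1.0 vs 2.0).
group_a = [individual_slope(1.0 * times + rng.normal(0, 0.5, 5))
           for _ in range(10)]
group_b = [individual_slope(2.0 * times + rng.normal(0, 0.5, 5))
           for _ in range(10)]

f_stat, p_value = stats.f_oneway(group_a, group_b)
print(round(f_stat, 2), p_value < 0.05)
```

Because the repeated measures are collapsed to one parameter per individual before the ANOVA, the within-individual dependence never enters the second stage.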
31

Rolfe, Margaret Irene. "Bayesian models for longitudinal data". Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/34435/1/Margaret_Rolfe_Thesis.pdf.

Abstract:
Longitudinal data, where data are repeatedly observed or measured on a temporal basis of time or age, provide the foundation for the analysis of processes which evolve over time; these can be referred to as growth or trajectory models. One of the traditional ways of looking at growth models is to employ either linear or polynomial functional forms to model trajectory shape, and to account for variation around an overall mean trend with the inclusion of random effects or individual variation on the functional shape parameters. The identification of distinct subgroups or sub-classes (latent classes) within these trajectory models, which are not based on some pre-existing individual classification, provides an important methodology with substantive implications. The identification of subgroups or classes has wide application in the medical arena, where responder/non-responder identification based on distinctly differing trajectories delivers further information for clinical processes. This thesis develops Bayesian statistical models and techniques for the identification of subgroups in the analysis of longitudinal data where the number of time intervals is limited. These models are then applied to a single case study which investigates the neuropsychological cognition of early-stage breast cancer patients undergoing adjuvant chemotherapy treatment, from the Cognition in Breast Cancer Study undertaken by the Wesley Research Institute of Brisbane, Queensland. Alternative formulations to the linear or polynomial approach are taken which use piecewise linear models with a single turning point, change-point or knot at a known time point, and latent basis models for the non-linear trajectories found for the verbal memory domain of cognitive function before and after chemotherapy treatment.
Hierarchical Bayesian random effects models are used as a starting point for the latent class modelling process and are extended with the incorporation of covariates in the trajectory profiles and as predictors of class membership. The Bayesian latent basis models enable the degree of recovery post-chemotherapy to be estimated for short- and long-term follow-up occasions, and the distinct class trajectories assist in the identification of breast cancer patients who may be at risk of long-term verbal memory impairment.
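One trajectory form mentioned above, a piecewise linear model with a single knot at a known time point, reduces to ordinary least squares once a hinge column is added to the design matrix. The toy below uses invented, noise-free data simply to show the construction:

```python
import numpy as np

# Piecewise linear trajectory with one known change-point (knot),
# fit by least squares on a hinge-augmented design matrix.

def hinge_design(t, knot):
    """Columns: intercept, slope before knot, extra slope after knot."""
    return np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])

t = np.arange(0.0, 10.0)
knot = 4.0
# True trajectory: decline before the knot, partial recovery after it.
y = 5.0 - 1.0 * t + 1.5 * np.maximum(t - knot, 0.0)

beta, *_ = np.linalg.lstsq(hinge_design(t, knot), y, rcond=None)
print(np.round(beta, 2))
```

The third coefficient is the change in slope at the knot, which is the quantity of clinical interest in a decline-then-recovery trajectory.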
32

Pulgatti, Leandro Duarte. "Data migration between different data models of NOSQL databases". reponame:Repositório Institucional da UFPR, 2017. http://hdl.handle.net/1884/49087.

Abstract:
Advisor: Marcos Didonet Del Fabro
Dissertation (master's) - Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defense: Curitiba, 17/02/2017
Includes references: f. 76-79
Abstract: Since their origin, NoSQL databases have achieved widespread use. Due to the lack of development standards in this new technology, great challenges emerge. Among these challenges, data migration between the various solutions has proved particularly difficult. There are heterogeneous data models, access languages and frameworks available, which makes data migration even more complex. Most of the solutions available today focus on providing an abstract and generic representation for all data models. These solutions focus on designing adapters to access the data homogeneously, but not on specifically implementing transformations between them. These approaches often need a framework to access the data, which may prevent their use in some scenarios. This dissertation proposes the creation of a metamodel and a series of rules capable of assisting in the data migration task. The data can be converted to various desired formats through an intermediate state. To validate the solution, several tests were performed with different systems, using available real data. Key words: NoSQL databases. Metamodel. Data migration.
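The conversion-through-an-intermediate-state idea can be shown with a deliberately simplified example: a record from a document model is lifted into a generic (entity, attribute, value) representation and then lowered into a key-value target. The representation and all names here are illustrative, not the dissertation's actual metamodel:

```python
# Simplified sketch of data migration via an intermediate state:
# document model -> generic triples -> key-value model.

def document_to_intermediate(doc_id, doc):
    """Lift a document into generic (entity, attribute, value) triples."""
    return [(doc_id, attr, value) for attr, value in doc.items()]

def intermediate_to_keyvalue(triples):
    """Lower the generic triples into a flat key-value store."""
    return {f"{entity}:{attr}": value for entity, attr, value in triples}

source = {"name": "Ada", "city": "Curitiba"}
triples = document_to_intermediate("user/1", source)
target = intermediate_to_keyvalue(triples)
print(target)
```

Adding a new source or target model then only requires one lift or lower function against the intermediate form, rather than a pairwise converter per model combination.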
33

Vemulapalli, Eswar Venkat Ram Prasad 1976. "Architecture for data exchange among partially consistent data models". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/84814.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2002.
Includes bibliographical references (leaves 75-76).
by Eswar Venkat Ram Prasad Vemulapalli.
S.M.
34

Andersen, Tobias Peulicke. "Midnight : The 3d creation process". Thesis, Högskolan på Gotland, Institutionen för speldesign, teknik och lärande, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hgo:diva-683.

Abstract:
In this report I go through the process of creating the 3D graphics for the game Midnight, produced at Gotland University in spring 2010. Midnight is a real-time strategy game that utilizes the rock-paper-scissors principle in the balancing of the teams. My main focus is the pipeline of the workflow used during production of the graphics: what worked, what didn't, and how to make it more efficient. In the method part of this report, I explain how the pipeline looked and worked. The process started with the production of concept art; turnarounds were produced, and the assets were then modeled and textured. When that was complete, rigging and animation commenced, and the artifacts were put into the game. The pipeline worked rather well later in the production, but was inefficient in the beginning because other projects and courses collided with this one. There were also problems with the structuring of the different teams: everybody wanted to be a part of everything, which led to inefficiency. This could be resolved with a stricter and better structure for the teams in the group.
35

Hall, Richard. "A computational story model based on a story grammar that represents conflict". Thesis, Federation University Australia, 2002. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/97261.

Abstract:
"The work in this thesis investigates whether a computational story model can be formulated that can overcome the limitations of existing story models and also interact with stories in multiple ways, similar to the ways in which people interact with them."
Doctor of Philosophy
36

Gajewski, Jedrzej M. "On-the-fly creation of three-dimensional models using VRML". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0001/MQ31575.pdf.

37

Varley, Peter Ashley Clifford. "Automatic creation of boundary-representation models from single line drawings". Thesis, Cardiff University, 2003. http://orca.cf.ac.uk/107713/.

Abstract:
This thesis presents methods for the automatic creation of boundary-representation models of polyhedral objects from single line drawings depicting the objects. This topic is important in that automated interpretation of freehand sketches would remove a bottleneck in current engineering design methods. The thesis does not consider conversion of freehand sketches to line drawings, or methods which require manual intervention or multiple drawings. The thesis contains a number of novel contributions to the art of machine interpretation of line drawings. Line labelling has been extended by cataloguing the possible tetrahedral junctions and by development of heuristics aimed at selecting a preferred labelling from the many possible labellings. The "bundling" method of grouping probably-parallel lines, and the use of feature detection to detect and classify hole loops, are both believed to be original. The junction-line-pair formalisation, which translates the problem of depth estimation into a system of linear equations, is new. Treating topological reconstruction as a tree-search is not only a new approach but tackles a problem which has not been fully investigated in previous work.
38

Cockey, Sean M. (Sean Michael). "Software for facilitating the creation of parametric urban resource models". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92175.

Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, June 2014.
Cataloged from PDF version of thesis. "June 2014."
Includes bibliographical references (pages 71-72).
This thesis describes a new software tool to facilitate Parametric Urban Resource Modeling, a method for quantitatively studying and improving the distribution of resources in a city. The software is intended to help users with no CAD or programming experience make digital Parametric Urban Resource Models of cities and optimize them algorithmically using a pre-determined set of rules. These models may help urban planners understand how to raise the population density of an area while maintaining or improving its livability, a critical challenge in our rapidly-urbanizing world. Preliminary feedback regarding this limited early prototype of the software has been promising. Parametric Urban Resource Modeling software such as this may eventually become an important early step in the urban design process, potentially saving time and improving the quality of the final city designs.
by Sean M. Cockey.
S.B.
39

Abbiw-Jackson, Roselyn Mansa. "Discrete optimization models in data visualization". College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1987.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2004.
Thesis research directed by: Applied Mathematics and Scientific Computation. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
40

Tronicke, Jens. "Patterns in geophysical data and models". Universität Potsdam, 2006. http://www.uni-potsdam.de/imaf/events/ge_work0602.html.

41

Markowetz, Florian. "Probabilistic models for gene silencing data". [S.l.] : [s.n.], 2005. http://www.diss.fu-berlin.de/2006/247/index.html.

42

Zingmark, Per-Henrik. "Models for Ordered Categorical Pharmacodynamic Data". Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis: Univ.-bibl. [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6125.

43

Karlsson, Johan T. "On data structures and memory models /". Luleå : Department of Computer Science and Electrical Engineering, Luleå University of Technology, 2006. http://epubl.ltu.se/1402-1544/2006/24/index.html.

44

Penton, Dave. "Linguistic data models : presentation and representation /". Connect to thesis, 2006. http://eprints.unimelb.edu.au/archive/00002875.

45

Scheid, Sandro. "Selection models for nonignorable missing data /". Frankfurt am Main : Lang, 2005. http://www.gbv.de/dms/zbw/477547427.pdf.

46

Frühwirth-Schnatter, Sylvia. "Data Augmentation and Dynamic Linear Models". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1992. http://epub.wu.ac.at/392/1/document.pdf.

Abstract:
We define a subclass of dynamic linear models with unknown hyperparameters called d-inverse-gamma models. We then approximate the marginal p.d.f.s of the hyperparameter and the state vector by the data augmentation algorithm of Tanner/Wong. We prove that the regularity conditions for convergence hold. A sampling based scheme for practical implementation is discussed. Finally, we illustrate how to obtain an iterative importance sampling estimate of the model likelihood. (author's abstract)
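The alternating conditional draws at the heart of data augmentation can be illustrated on a model far simpler than the d-inverse-gamma class treated above: y_i ~ N(mu, sigma2) with a flat prior on mu and an inverse-gamma-type conditional for sigma2. This toy two-block sampler on synthetic data is purely a sketch of the idea, not the paper's algorithm:

```python
import numpy as np

# Toy alternating sampler: draw the "state" mu given the hyperparameter
# sigma2, then sigma2 given mu, repeating (Gibbs / data-augmentation style).

rng = np.random.default_rng(1)
y = rng.normal(3.0, 2.0, size=200)
n, ybar = len(y), y.mean()

mu, sigma2 = 0.0, 1.0
mu_draws = []
for _ in range(2000):
    # State given hyperparameter: mu | sigma2, y ~ N(ybar, sigma2 / n).
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    # Hyperparameter given state: inverse-gamma conditional,
    # drawn as scale / Gamma(n/2) since 1/Gamma(a) ~ InvGamma(a, 1).
    scale = 0.5 * np.sum((y - mu) ** 2)
    sigma2 = scale / rng.gamma(n / 2.0)
    mu_draws.append(mu)

print(round(float(np.mean(mu_draws[500:])), 1))
```

Discarding the first draws as burn-in, the retained draws approximate the marginal posterior of mu, which here concentrates around the sample mean.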
Series: Forschungsberichte / Institut für Statistik
47

Suarez, Jose. "Data-true characterization of neuronal models". Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5059.

Abstract:
In this thesis, a weighted least squares approach is initially presented to estimate the parameters of an adaptive quadratic neuronal model. By casting the discontinuities in the state variables at the spiking instants as an impulse train driving the system dynamics, the neuronal output is represented as a linearly parameterized model that depends on filtered versions of the input current and the output voltage at the cell membrane. A prediction-error-based weighted least squares method is formulated for the model. This method allows for rapid estimation of model parameters under a persistently exciting input current injection. Simulation results show the feasibility of this approach to predict multiple neuronal firing patterns. Results of the method using data from a detailed ion-channel-based model showed issues that served as the basis for the more robust resonate-and-fire model presented. A second method is proposed to overcome some of the issues found in the adaptive quadratic model. The original quadratic model is replaced by a linear resonate-and-fire model (with stochastic threshold) that is both computationally efficient and suitable for larger network simulations. The parameter estimation method presented here consists of different stages where the set of parameters is divided into two. The first set of parameters is assumed to represent the subthreshold dynamics of the model and is estimated using a nonlinear least squares algorithm, while the second set is associated with the threshold and reset parameters and is estimated using maximum likelihood formulations. The validity of the estimation method is then tested using detailed Hodgkin-Huxley model data as well as experimental voltage recordings from rat motoneurons.
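For a linearly parameterized model y = X θ, the weighted least squares estimate this abstract relies on solves (XᵀWX) θ = XᵀWy. The sketch below uses synthetic regressors and weights as placeholders; the thesis's actual regressors are filtered membrane-voltage and input-current signals:

```python
import numpy as np

# Generic weighted least squares on a linearly parameterized model,
# down-weighting samples with large prediction-error variance.

rng = np.random.default_rng(2)
theta_true = np.array([1.5, -0.7])

X = rng.normal(size=(500, 2))          # persistently exciting regressors
noise_sd = np.linspace(0.1, 1.0, 500)  # heteroscedastic prediction errors
y = X @ theta_true + rng.normal(0, noise_sd)

W = 1.0 / noise_sd**2                  # weight low-noise samples more
# Normal equations: (X^T W X) theta = X^T W y
theta_hat = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
print(np.round(theta_hat, 2))
```

With uniform weights this reduces to ordinary least squares; the weighting matters exactly when, as in spiking data, the error variance changes across samples.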
ID: 030423378; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (M.S.E.E.)--University of Central Florida, 2011.; Includes bibliographical references (p. 48-51).
M.S.E.E.
Masters
Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering
48

Lewis, Michael. "Data compression for digital elevation models". Thesis, University of South Wales, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265470.

49

Kapoor, Ashish 1977. "Learning discriminative models with incomplete data". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34181.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.
Includes bibliographical references (p. 115-121).
Many practical problems in pattern recognition require making inferences using multiple modalities, e.g. sensor data from video, audio, physiological changes etc. Often in real-world scenarios there can be incompleteness in the training data. There can be missing channels due to sensor failures in multi-sensory data and many data points in the training set might be unlabeled. Further, instead of having exact labels we might have easy to obtain coarse labels that correlate with the task. Also, there can be labeling errors, for example human annotation can lead to incorrect labels in the training data. The discriminative paradigm of classification aims to model the classification boundary directly by conditioning on the data points; however, discriminative models cannot easily handle incompleteness since the distribution of the observations is never explicitly modeled. We present a unified Bayesian framework that extends the discriminative paradigm to handle four different kinds of incompleteness. First, a solution based on a mixture of Gaussian processes is proposed for achieving sensor fusion under the problematic conditions of missing channels. Second, the framework addresses incompleteness resulting from partially labeled data using input dependent regularization.
Third, we introduce the located hidden random field (LHRF) that learns finer level labels when only some easy to obtain coarse information is available. Finally the proposed framework can handle incorrect labels, the fourth case of incompleteness. One of the advantages of the framework is that we can use different models for different kinds of label errors, providing a way to encode prior knowledge about the process. The proposed extensions are built on top of Gaussian process classification and result in a modular framework where each component is capable of handling different kinds of incompleteness. These modules can be combined in many different ways, resulting in many different algorithms within one unified framework. We demonstrate the effectiveness of the framework on a variety of problems such as multi-sensor affect recognition, image classification and object detection and segmentation.
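The first kind of incompleteness above, missing sensor channels, can be caricatured in a few lines: each available channel contributes a score, and fusion renormalizes over whichever channels actually reported. This hand-rolled sketch is not the thesis's mixture of Gaussian processes; it only illustrates the fail-soft fusion behavior:

```python
# Toy fail-soft fusion: average class scores over present channels only,
# so a failed sensor (None) does not break the prediction.

def fuse(channel_scores):
    """Average scores over the channels that actually reported."""
    present = {ch: s for ch, s in channel_scores.items() if s is not None}
    if not present:
        raise ValueError("no channel available")
    return sum(present.values()) / len(present)

# Three channels; the audio sensor failed for this sample.
scores = {"video": 0.9, "audio": None, "physiology": 0.7}
print(fuse(scores))
```

The mixture-of-Gaussian-processes approach replaces this flat average with input-dependent expert weights, but the key property is the same: inference degrades gracefully as channels drop out.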
by Ashish Kapoor.
Ph.D.
50

Williamson, Sinead Anne. "Nonparametric Bayesian models for dependent data". Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610373.
