Dissertations / Theses on the topic 'Exploration models'

To see the other types of publications on this topic, follow the link: Exploration models.

Consult the top 50 dissertations / theses for your research on the topic 'Exploration models.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Gross, Joshua. "An exploration of stochastic models." Kansas State University, 2014. http://hdl.handle.net/2097/17656.

Full text
Abstract:
Master of Science
Department of Mathematics
Nathan Albin
The term stochastic is defined as having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely. A stochastic model attempts to estimate outcomes while allowing a random variation in one or more inputs over time. These models are used across a number of fields, from gene expression in biology to stock, asset, and insurance analysis in finance. In this thesis, we will build up the basic probability theory required to make an "optimal estimate", as well as construct the stochastic integral. This information will then allow us to introduce stochastic differential equations, along with our overall model. We will conclude with the "optimal estimator", the Kalman filter, along with an example of its application.
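The abstract closes with the Kalman filter as the "optimal estimator". As a point of reference, the filter's predict/update cycle fits in a few lines; the one-dimensional random-walk model, noise variances, and observation sequence below are illustrative assumptions, not material from the thesis:

```python
# Minimal 1-D Kalman filter: estimate a constant signal observed with noise.
# Model: x_k = x_{k-1} + w_k (process noise variance q), z_k = x_k + v_k (variance r).

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: propagate the state and grow its uncertainty.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy observations of the constant value 5.0
zs = [4.8, 5.3, 4.9, 5.1, 5.4, 4.7, 5.0, 5.2]
est = kalman_1d(zs)
```

Each step shrinks the error variance `p`, so successive estimates lean less on new measurements and converge toward the underlying value.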
APA, Harvard, Vancouver, ISO, and other styles
2

Carter, Faye Isobel. "Exploration of siblings' explanatory models of autism." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1581479771&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kamphans, Thomas. "Models and algorithms for online exploration and search." [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=980408121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gosch, Aron. "Exploration of 5G Traffic Models using Machine Learning." Thesis, Linköpings universitet, Databas och informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168160.

Full text
Abstract:
The Internet is a major communication tool that handles massive information exchanges, sees rapidly increasing usage, and offers an increasingly wide variety of services. In addition to these trends, the services themselves have highly varying quality of service (QoS) requirements, and network providers must take into account the frequent releases of new network standards like 5G. This has resulted in a significant need for new theoretical models that can capture different network traffic characteristics. Such models are important both for understanding the existing traffic in networks and for generating better synthetic traffic workloads that can be used to evaluate future generations of network solutions under realistic workload patterns, a broad range of assumptions, and changing popularity of existing and future application classes. To better meet these changes, new flexible methods are required. In this thesis, a new framework aimed at analyzing large quantities of traffic data is developed and used to discover key characteristics of application behavior for IP network traffic. Traffic models are created by breaking down IP log traffic data into different abstraction layers with descriptive values. The aggregated statistics are then clustered using the K-means algorithm, which results in groups with closely related behaviors. Lastly, the model is evaluated with cluster analysis and three different machine learning algorithms that classify the network behavior of traffic flows. From the analysis framework, a set of observed traffic models with distinct behaviors is derived that may be used as building blocks for traffic simulations in the future. Based on the framework, we have seen that machine learning achieves high performance on the classification of network traffic, with a multilayer perceptron obtaining the best results.
Furthermore, the study has produced a set of ten traffic models that have been demonstrated to be able to reconstruct traffic for various network entities.
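The pipeline described (aggregate per-flow statistics, then group them with K-means) can be sketched with a tiny hand-rolled K-means; the two synthetic "flow" features below, mean packet size and mean inter-arrival time, are illustrative assumptions rather than features from the thesis:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its members, repeating."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for c, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[c] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    labels = [min(range(k),
                  key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
              for p in points]
    return centroids, labels

# Illustrative aggregated flow statistics: (mean packet size, mean inter-arrival time)
flows = [(1400, 0.01), (1350, 0.02), (1420, 0.015),   # bulk-transfer-like
         (80, 0.5), (95, 0.6), (70, 0.55)]            # chatty/control-like
cents, labels = kmeans(flows, k=2)
```

On this toy data the two behaviour groups are well separated, so the algorithm recovers them regardless of which points seed the centroids.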

Due to COVID-19, the presentation was given over Zoom.

APA, Harvard, Vancouver, ISO, and other styles
5

Havercroft, William G. "Exploration of marginal structural models for survival outcomes." Thesis, University of Bristol, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.684750.

Full text
Abstract:
A marginal structural model parameterises the distribution of an outcome given a treatment intervention, where such a distribution is the fundamental probabilistic representation of the causal effect of treatment on the outcome. Causal inference methods are designed to consistently estimate aspects of these causal distributions, in the presence of interference from non-causal associations which typically occur in observational data. One such method, which involves the application of inverse probability of treatment weights, directly targets the parameters of marginal structural models. The asymptotic properties and practical applicability of this method are well established, but little attention has been paid to its finite-sample performance. This is because simulating data from known distributions which are entirely suitable for such investigations generally presents a significant challenge, especially in scenarios where the outcome is survival time. We illuminate these issues, and propose and implement certain solutions, considering separately the cases of static (pre-determined) and dynamic (tailored) treatment interventions. In so doing, we explore both theoretical and practical aspects of marginal structural models for survival outcomes, and the associated inference method.
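The inverse-probability-of-treatment-weighting estimator the thesis investigates can be illustrated in a few lines; the toy subjects and their (assumed known) propensity scores below are fabricated for the sketch, not data from the thesis:

```python
# Toy IPTW: each subject has treatment A (0/1), outcome Y, and a propensity
# score e = P(A=1 | confounders). Weighting treated subjects by 1/e and
# controls by 1/(1-e) creates a pseudo-population in which treatment is
# unconfounded, so a weighted mean difference estimates the causal effect.

subjects = [  # (A, Y, propensity)
    (1, 10.0, 0.8), (1, 9.0, 0.6), (1, 11.0, 0.7),
    (0, 5.0, 0.8), (0, 6.0, 0.3), (0, 4.0, 0.2),
]

def iptw_effect(data):
    wt = [(y, 1 / e) for a, y, e in data if a == 1]
    wc = [(y, 1 / (1 - e)) for a, y, e in data if a == 0]
    mean_t = sum(y * w for y, w in wt) / sum(w for _, w in wt)
    mean_c = sum(y * w for y, w in wc) / sum(w for _, w in wc)
    return mean_t - mean_c

effect = iptw_effect(subjects)
```

In practice the propensity scores are themselves estimated, and the finite-sample behaviour of the resulting weighted estimator is exactly what the thesis examines.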
APA, Harvard, Vancouver, ISO, and other styles
6

Mulchahey, Kenneth E. "Exploration of Complexities for Migration of Software-Licensing Models." Thesis, Capella University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13806407.

Full text
Abstract:

Some independent software vendors might not endure the reduction in revenue and increased costs associated with switching software-license models. The lack of identified and prioritized complexities might lead to increased revenue loss or prevent small and medium-sized independent software vendors within the United States from migrating from perpetual software licensing to subscription-based models. The purpose of this qualitative case study was to explore, document, and describe organizational complexities and their prioritization in contributing to the failure or success of software-license model migration. The primary research question was: What complexities can senior management of small and medium-sized independent software vendors (ISVs) encounter when migrating from a perpetual license model to a subscription-based licensing model? Supporting research question 1: What is the prioritization of the complexities determined to be a factor in the migration of software licensing models? Supporting research question 2: How do identifying and prioritizing complexities affect decision-making to mitigate potential initial revenue loss? For this study, eight managers were recruited from a small to medium-sized independent software vendor. Specifically, participants in the sample were managers who had worked within the software industry for at least four years, had knowledge of the company's existing software license model, and were involved in the consideration of migrating from a perpetual license model to a subscription-based licensing model. The data collection methods for this research were face-to-face interviews, a focus group, and direct observations. Multi-criteria decision analysis theory and diffusion of influence theory served as the conceptual framework for this research.
The framework provided a model for software vendor executives to identify and prioritize complexities and reduce the initial loss of revenue during license migration. The themes that emerged were financial, go-to-market, infrastructure, reorganization, security, training, and unknown strategy. The themes were consistent with the literature, and the data were consistent with the multi-criteria decision and diffusion of influence theories. The results indicated four key findings: support functions were less aware of complexities, no evidence of a clear strategic plan was present, the most significant anticipated complexity was the go-to-market complexity, and identifying and prioritizing complexities had a direct effect on decision-making. Exploring and understanding the totality of complexities an independent software vendor may encounter, prioritizing those complexities, and adjusting decision-making to compensate for them, while establishing and following a communicated strategic plan, may significantly reduce the potential for financial loss and increase market positioning and competitive advantage. The results and limitations may provide areas for future research. Future studies should conduct similar research with multiple independent software vendors to provide additional validation and reliability, including vendors who have both successfully and unsuccessfully migrated license models.

APA, Harvard, Vancouver, ISO, and other styles
7

Gaier, Adam. "Accelerating Evolutionary Design Exploration with Predictive and Generative Models." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0087.

Full text
Abstract:
Optimization plays an essential role in industrial design, but is not limited to minimization of a simple function, such as cost or strength. These tools are also used in conceptual phases, to better understand what is possible. To support this exploration we focus on Quality Diversity (QD) algorithms, which produce sets of varied, high-performing solutions. These techniques often require the evaluation of millions of solutions, making them impractical in design cases. In this thesis we propose methods to radically improve the data-efficiency of QD with machine learning, enabling its application to design. In our first contribution, we develop a method of modeling the performance of evolved neural networks used for control and design. The structures of these networks grow and change, making them difficult to model, but with a new method we are able to estimate their performance based on their heredity, improving data-efficiency by several times. In our second contribution we combine model-based optimization with MAP-Elites, a QD algorithm. A model of performance is created from known designs, and MAP-Elites creates a new set of designs using this approximation. A subset of these designs is then evaluated to improve the model, and the process repeats. We show that this approach improves the efficiency of MAP-Elites by orders of magnitude. Our third contribution integrates generative models into MAP-Elites to learn domain-specific encodings. A variational autoencoder is trained on the solutions produced by MAP-Elites, capturing the common "recipe" for high performance. This learned encoding can then be reused by other algorithms for rapid optimization, including MAP-Elites. Throughout this thesis, though the focus of our vision is design, we examine applications in other fields, such as robotics. These advances are not exclusive to design, but serve as foundational work on the integration of QD and machine learning.
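The MAP-Elites loop the thesis accelerates is itself very short: keep, per behaviour niche, the best solution seen so far, and mutate random elites to generate candidates. The sketch below uses a toy fitness and a one-dimensional behaviour descriptor of my own choosing, purely for illustration:

```python
import random

def map_elites(fitness, behaviour, bins=10, evals=2000, seed=1):
    """Minimal MAP-Elites over a 1-D behaviour space in [-1, 1]."""
    rng = random.Random(seed)
    archive = {}  # niche index -> (fitness, genome)
    for _ in range(evals):
        if archive and rng.random() < 0.9:
            # Mutate a randomly chosen elite.
            _, parent = archive[rng.choice(list(archive))]
            x = [g + rng.gauss(0, 0.1) for g in parent]
        else:
            # Occasionally sample a fresh random genome.
            x = [rng.uniform(-1, 1) for _ in range(2)]
        niche = min(bins - 1, max(0, int((behaviour(x) + 1) / 2 * bins)))
        f = fitness(x)
        # Elitism per niche: keep only the best solution in each cell.
        if niche not in archive or f > archive[niche][0]:
            archive[niche] = (f, x)
    return archive

# Toy domain: fitness rewards genomes near the origin; behaviour is x[0].
arch = map_elites(lambda x: -sum(v * v for v in x),
                  lambda x: max(-1.0, min(1.0, x[0])))
```

The result is an archive of diverse elites, one per niche, which is exactly the kind of solution set a surrogate model can then be trained on.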
APA, Harvard, Vancouver, ISO, and other styles
8

Garcia, Gomez David. "Exploration of customer churn routes using machine learning probabilistic models." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/144660.

Full text
Abstract:
The ongoing processes of globalization and deregulation are changing the competitive framework in the majority of economic sectors. The appearance of new competitors and technologies entails a sharp increase in competition and a growing preoccupation among service-providing companies with creating stronger bonds with customers. Many of these companies are shifting resources away from the goal of capturing new customers and are instead focusing on retaining existing ones. In this context, anticipating the customer's intention to abandon, a phenomenon also known as churn, and facilitating the launch of retention-focused actions represent clear elements of competitive advantage. Data mining, as applied to market surveyed information, can provide assistance to churn management processes. In this thesis, we mine real market data for churn analysis, placing a strong emphasis on the applicability and interpretability of the results. Statistical Machine Learning models for simultaneous data clustering and visualization lay the foundations for the analyses, which yield an interpretable segmentation of the surveyed markets. To achieve interpretability, much attention is paid to the intuitive visualization of the experimental results. Given that the modelling techniques under consideration are nonlinear in nature, this represents a non-trivial challenge. Newly developed techniques for data visualization in nonlinear latent models are presented. They are inspired by geographical representation methods and suited to both static and dynamic data representation.
APA, Harvard, Vancouver, ISO, and other styles
9

Kamphans, Tom [Verfasser]. "Models and Algorithms for Online Exploration and Search / Tom Kamphans." Aachen : Shaker, 2011. http://d-nb.info/1098040260/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ost, Luciano Copello. "Abstract models of NoC-based MPSoCs for design space exploration." Pontifícia Universidade Católica do Rio Grande do Sul, 2010. http://hdl.handle.net/10923/1663.

Full text
Abstract:
NoC-based MPSoCs can provide massive computing power on a single chip, achieving hundreds of billions of operations per second by employing dozens of processing cores that communicate over a packet-switched network at a rate that exceeds 100 Tbps. Such devices can support the convergence of several appliances (e.g. HDTV, multiple wireless communication standards, media players, gaming) due to their comparatively high performance, flexibility and power efficiency. Given the vast design space, evaluating NoC-based MPSoCs at lower abstraction levels does not provide the support required to identify the most efficient NoC architecture for the performance constraints (e.g. latency, power) of a given application at early stages of the design process. Thus, NoC-based MPSoC design requires simple and accurate high-level models in order to achieve precise performance results, for each design alternative, in an acceptable design time. In this context, the present thesis has two main contributions: (i) development of abstract NoC models, providing accurate performance evaluation; and (ii) integration of the proposed models into a model-based design flow, allowing the design space exploration of NoC-based MPSoCs at early stages of the design flow.
APA, Harvard, Vancouver, ISO, and other styles
11

Ablin, Pierre. "Exploration of multivariate EEG/MEG signals using non-stationary models." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT051.

Full text
Abstract:
Independent Component Analysis (ICA) models a set of signals as linear combinations of independent sources. This analysis method plays a key role in electroencephalography (EEG) and magnetoencephalography (MEG) signal processing. Applied to such signals, it allows interesting brain sources to be isolated, located, and separated from artifacts. ICA belongs to the toolbox of many neuroscientists and is part of the processing pipeline of many research articles. Yet the most widely used algorithms date back to the 90's. They are often quite slow, and stick to the standard ICA model, without more advanced features. The goal of this thesis is to develop practical ICA algorithms to help neuroscientists. We follow two axes. The first one is speed. We consider the optimization problems solved by two of the ICA algorithms most widely used by practitioners: Infomax and FastICA. We develop a novel technique based on preconditioning the L-BFGS algorithm with Hessian approximations. The resulting algorithm, Picard, is tailored for real data applications, where the independence assumption is never entirely true. On M/EEG data, it converges faster than the 'historical' implementations. Another way to accelerate ICA is to use incremental methods, which process a few samples at a time instead of the whole dataset. Such methods have gained considerable interest in recent years due to their ability to scale to very large datasets. We propose an incremental algorithm for ICA with important descent guarantees. As a consequence, the proposed algorithm is simple to use and does not have a critical and hard-to-tune parameter like a learning rate. In a second axis, we propose to incorporate noise in the ICA model. Such a model is notoriously hard to fit under the standard non-Gaussian hypothesis of ICA, and would render estimation extremely long. Instead, we rely on a spectral diversity assumption, which leads to a practical algorithm, SMICA.
The noise model opens the door to new possibilities, like finer estimation of the sources and the use of ICA as a statistically sound dimension reduction technique. Thorough experiments on M/EEG datasets demonstrate the usefulness of this approach. All algorithms developed in this thesis are open-sourced and available online. The Picard algorithm is included in MNE, the largest M/EEG processing library in Python, and in the Matlab library EEGLAB.
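For context on what Picard accelerates, the classical symmetric FastICA iteration that the thesis benchmarks against can be written compactly with NumPy; the toy sources, the mixing matrix, and the tanh nonlinearity below are standard textbook choices, not code from the thesis:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric fixed-point FastICA with tanh nonlinearity on mixtures X (n x T)."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    # Whiten: center, then decorrelate and scale to unit variance.
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        WX = W @ Xw
        g, g_prime = np.tanh(WX), 1 - np.tanh(WX) ** 2
        # Fixed-point update: E[g(Wx) x^T] - diag(E[g'(Wx)]) W
        W = (g @ Xw.T) / T - np.diag(g_prime.mean(axis=1)) @ W
        # Symmetric decorrelation keeps the rows of W orthonormal.
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt
    return W @ Xw

t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)), np.sin(5 * t)])   # independent toy sources
X = np.array([[1.0, 0.5], [0.5, 1.0]]) @ S               # observed mixtures
S_hat = fastica(X)
```

Recovered sources come back in arbitrary order and sign, which is the usual ICA indeterminacy; they are matched to the true sources by absolute correlation.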
APA, Harvard, Vancouver, ISO, and other styles
12

Veselinović, Milica. "Genetic models for epithermal gold deposits and applications to exploration." Thesis, Rhodes University, 1992. http://hdl.handle.net/10962/d1005562.

Full text
Abstract:
Epithermal gold deposits are the product of large-scale hydrothermal systems in tectonically active regions. They form at shallow crustal levels where the physico-chemical conditions change abruptly. Two major groups of epithermal gold deposits can be distinguished based on their genetic connection with: A) copper-molybdenum porphyry systems and B) geothermal systems related to volcanic centres and calderas. Epithermal gold deposits connected with geothermal systems encompass three major types: adularia-sericite, acid-sulphate and disseminated replacement (the Carlin type). Their essential ingredients are: a high heat source which leads to convection of groundwater in the upper crust; a source of hydrothermal fluid, metals and reduced sulphur; and high-permeability structures which allow fluid convection and metal deposition. Mixing of these ingredients leads to the formation of epithermal gold deposits throughout crustal history, without any restriction on age. The ores were deposited from near-neutral (adularia-sericite type and some of the Carlin type) to acidic (acid-sulphate type and porphyry-related epithermal gold deposits), low-salinity, high-CO₂ and high-H₂S fluids, which were predominantly meteoric in origin. The transport capability of deep fluids in epithermal hydrothermal systems may be shown to depend largely on their H₂S content and, through a series of fluid-mineral equilibria, on temperature and on CO₂ content. The most common mechanisms of ore deposition are boiling (phase separation), mixing of fluids of different temperatures and salinities, reaction between the fluids and wall rocks, dilution and cooling. An understanding of genetic models for epithermal gold deposits provides the basis for the selection of favourable areas for regional- to prospect-scale exploration.
APA, Harvard, Vancouver, ISO, and other styles
13

Coutts, Jacob J. "Contrasting Contrasts: An Exploration of Methods for Comparing Indirect Effects in Mediation Models." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1605579439442685.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Martinez, Ricardo. "Knowledge integration models for mining gene expression data." Nice, 2007. http://www.theses.fr/2007NICE4060.

Full text
Abstract:
In this thesis, we first present an original point of view for the state of the art on methods developed for interpreting gene expression results through corresponding gene annotations. Then, we tackle the non-supervised learning issue of class discovery among gene expression profiles, and we propose two specific approaches on this subject: CGGA (Co-expressed Gene Groups Analysis) and GENMINER (Gene-integrated analysis using association rules mining). CGGA is a knowledge-based approach which automatically integrates gene expression profiles and gene annotations obtained from genome-wide information databases such as Gene Ontology. GENMINER is a co-clustering and bi-clustering approach which automatically integrates gene annotations and gene expression profiles at once to discover intrinsic associations between these two heterogeneous sources of information. Finally, we focus on the supervised learning issue of class prediction, and we propose GENETREE (GENE-integrated analysis for biological sample prediction using decision TREEs), an approach which takes advantage of the well-known decision tree algorithm C5.0 and adapts its entropy splitting principle with several ontology-based criteria.
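The entropy-splitting principle that C5.0 uses, and that GENETREE adapts, reduces to choosing the test with the highest information gain: parent entropy minus the size-weighted entropy of the partitions a test induces. A minimal sketch, with toy class labels of my own invention:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, splits):
    """Entropy of the parent minus the size-weighted entropy of the splits."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)

# Toy: class labels before a split, and the partitions two candidate tests induce.
parent = ['up'] * 4 + ['down'] * 4
perfect = information_gain(parent, [['up'] * 4, ['down'] * 4])      # pure halves
useless = information_gain(parent, [['up'] * 2 + ['down'] * 2] * 2)  # no separation
```

A test that separates the classes perfectly gains the full parent entropy (here 1 bit), while a test that preserves the class mix gains nothing, which is exactly why the algorithm prefers the former.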
APA, Harvard, Vancouver, ISO, and other styles
15

He, Wenbin. "Exploration and Analysis of Ensemble Datasets with Statistical and Deep Learning Models." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1574695259847734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Pestill, Melissa E. "Exploration of Nurses' Experiences Transitioning to a Team-Nursing Model of Care." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/3915.

Full text
Abstract:
In response to the needs of patients, coupled with nursing workforce predictions and the pressure of cost containment, a shift to a new team-nursing model of care has been seen in Canada and Australia. Today's patients require multiple resources, nurses with additional skillsets, and vast amounts of experience during their hospital stays, and a team of nurses can meet these needs. This project explored the experiences and perspectives of nurses during the implementation of a team-nursing model of care on a 32-bed inpatient cardiology floor in southern Ontario. The purposes of this project were to conduct a formative evaluation of the pilot-unit implementation and to make recommendations for future units that will implement this change in model. The project tracked all nurses on the pilot unit, from frontline nurses to those of influence and authority. Guided by an action research framework and a qualitative approach, nurses' experiences were explored through observations and analysis of organizational reports. These data were triangulated and further validated with evidence from the current literature. Major themes included the need for clear definitions of roles and responsibilities, a strong organizational support system, and the recognition that team nursing was more than a division of tasks: it was a shift in culture to one of shared responsibility and accountability for all patients. These findings have implications for positive social change by informing the work of those in the health care setting and illuminating the benefits of team-based nursing.
APA, Harvard, Vancouver, ISO, and other styles
17

Koshy, Jacob. "An exploration of the use in practice of credit risk models." Thesis, Kingston University, 2012. http://eprints.kingston.ac.uk/23705/.

Full text
Abstract:
Credit risk is treated as a major risk in banks and has become more important with the 2008 financial crisis and the subsequent regulatory controls, mainly in the form of new changes in Basel II and the proposed Basel III requirements. The use of credit risk models grew in the 2000s due to both the use of internal models in Basel II as well as bank use for economic capital calculations. These models have a large and growing influence on how credit risks are managed, yet there is a gap in the current literature on how these models are used in practice. This research explores their use in banks to help provide academic and management insight into the actual use of credit risk models. An interpretative approach using qualitative case studies was undertaken in three banks, with face-to-face interviews of the key credit risk managers who worked in the methodology, decision making, monitoring, control and reporting areas. While interviews were the main source of data for the research, they were supported by observation and a review of documentation relating to the use of credit risk models in each bank. The research findings show the merits in examining the social, organisational and cultural constructions as well as the role of individuals in this process. This evidences the usefulness of interpretive research, which thrives on diversity of meanings as opposed to comparing just the technical aspects of the models as found in more traditional studies. This research provides a contribution to the academic understanding of the use of credit risk models not found in any of the studies to date. This includes new insights into the use of qualitative information, the use of expert judgement (including an element of gut feel), how model complexity can detract from model use and the importance of aligning models to the risk appetite of the bank.
These findings are significant both from an academic and practitioner aspect as they open up commonly-hidden processes on how these models are used in practice.
APA, Harvard, Vancouver, ISO, and other styles
18

Agwamba, Kennedy. "An Interactive Tool for the Computational Exploration of Integrodifference Population Models." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/hmc_theses/70.

Full text
Abstract:
Mathematical modeling of population dynamics can provide novel insight into the growth and dispersal patterns of a variety of species populations, and has become vital to the preservation of biodiversity on a global scale. These growth and dispersal stages can be modeled using integrodifference equations that are discrete in time and continuous in space. Previous studies have identified metrics that can determine whether a given species will persist or go extinct under certain model parameters. However, a need for computational tools to compute these metrics has limited the scope and analysis within many of these studies. We aim to create computational tools that facilitate numerical explorations for a number of associated integrodifference equations, allowing modelers to explore results using a selection of models under a robust parameter set.
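For readers unfamiliar with the form, an integrodifference model iterates a local growth map and a dispersal-kernel integral each generation: n_{t+1}(x) = ∫ k(x, y) f(n_t(y)) dy. A minimal numerical sketch of that iteration follows; the Beverton-Holt growth map, Gaussian kernel, and all parameter values are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# Spatial grid on [-L, L]
L, n = 10.0, 201
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

def growth(u, r=2.0):
    """Local growth stage: Beverton-Holt map (an illustrative choice)."""
    return r * u / (1.0 + u)

def gaussian_kernel(xi, yj, sigma=1.0):
    """Dispersal kernel k(x, y): density for moving from y to x."""
    return np.exp(-((xi - yj) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

K = gaussian_kernel(x[:, None], x[None, :])  # n-by-n kernel matrix

def generation(u):
    """One time step: grow locally, then disperse via the kernel integral."""
    return (K @ growth(u)) * dx  # Riemann-sum approximation of the integral

u = np.where(np.abs(x) < 1.0, 0.5, 0.0)  # small localized initial population
for _ in range(20):
    u = generation(u)
print(f"peak density after 20 generations: {u.max():.3f}")
```

With r = 2 the nontrivial fixed point of the growth map is 1, so a persisting population plateaus near that density; lowering r or widening sigma on a bounded domain is a quick way to explore the persistence-versus-extinction behavior the abstract mentions.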
APA, Harvard, Vancouver, ISO, and other styles
19

Peña, Araya Vanessa Carolina. "Spatio-temporal historical event visual exploration through social media-based models." Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/168484.

Full text
Abstract:
Thesis submitted for the degree of Doctor of Science, Computer Science
Online social media platforms serve as important sources of information about what is happening in the world and how people react to these events. Among all the useful information that scientists have extracted from these repositories, the analysis of messages related to real-world events represents an important opportunity for historical news analysis. Since the messages published on these platforms contain different points of view on a news story, they contribute information that may not have been published by traditional media. Among the aspects of a news event that can be studied, the geopolitical relations that follow from it hold valuable information for future historical analysis. Indeed, understanding the relations between countries, their development over time, and how people reacted to them is essential to understanding the present. However, extracting useful information from these platforms is not an easy task, given the growing speed at which messages are published, the unstructured nature of their content, and the enormous repositories they generate. Moreover, extracting new knowledge requires tools that allow domain experts to generate new hypotheses. This need for collaboration between computational systems and end users gives the problem two components. The first is that the data can be difficult to store, retrieve, and process without adequate high-level representations. The second is that exploring a large number of messages with human eyes can be impossible without the right tools. The goal of this thesis is to address these two problems. The first, concerning the efficiency of computational processing of the data, is addressed by presenting a high-level representation of news events in their geopolitical context.
More specifically, we propose a spatio-temporally context-aware event representation that incorporates both the locations involved in the real world and those to which the event propagated. We explore the usefulness of this model using news-event data extracted from Twitter over a two-year time window. We address the second problem, concerning the exploration of messages by domain experts, by designing visual tools to explore them. We first designed a visual web interface, called Galean, that allows users to explore news given the aforementioned event representation. We evaluated this interface through a qualitative study with potential end users and a quantitative study with 30 participants. Based on the feedback received, we designed and evaluated a new way of visualizing geographic and temporal data called Cartoglyphs.
CONICYT, Instituto Milenio en Fundamentos de los Datos
APA, Harvard, Vancouver, ISO, and other styles
20

Gonçalves, Ana Sofia Ribeiro. "Validação do modelo de ansiedade Light / Dark Exploration Test em ratos Wistar." Master's thesis, Instituto Superior de Psicologia Aplicada, 2007. http://hdl.handle.net/10400.12/558.

Full text
Abstract:
Master's dissertation in Ethology
The elevated plus-maze and light/dark models are widely used in the study of anxiolytic compounds and of the neurobiological mechanisms of anxiety. The elevated plus-maze (EPM) and the open-field (OF) are validated tests for assessing anxiety in rats and mice. The light/dark (LD) test is also used in anxiety models; however, it has only been validated for mouse models. To validate the LD test as an anxiety model in rats, the following parameters were assessed: construct validity, external validity, and predictive validity. These parameters were analyzed through behavioural, neurochemical, and hormonal evaluations in the presence and absence of anxiolytic compounds, and through a comparative analysis of these evaluations in the EPM and LD tests. A comparative analysis of the behavioural data between the OF test and the EPM and LD tests was also carried out. Each rat (Wistar strain, 200-220 g) was placed in the centre of the OF apparatus and left to explore it for 20 minutes. The first 5 minutes were used to assess anxiety behaviours and the last 15 minutes to assess locomotor activity. One week later each rat was placed in the EPM apparatus (n=6) or the LD apparatus (n=6) for 5 minutes. Individuals treated with diazepam (0.5 mg/kg) (n=6 for each test) and propranolol (10 mg/kg) (n=6 for each test) in a 2 ml/kg saline solution, and individuals treated with saline alone (n=6 for each test), were also submitted to these tests in the same order. The experimental sessions were recorded by a video camera placed one metre above the apparatus, and the behavioural measures were recorded and analyzed using the Noldus Observer software. Immediately after each EPM or LD session the animals were sacrificed; blood was collected for hormone assays and the amygdala and hippocampus were dissected for later neurochemical analysis.
The concentrations of dopamine (DA), serotonin (5-HT), and their metabolites were quantified by high-performance liquid chromatography combined with electrochemical detection. Plasma corticosterone levels were measured by enzyme immunoassay. The behavioural results show that the EPM and LD tests induce similar behavioural responses, suggesting that the behaviours recorded in the tests reflect the same psychological state: anxiety. This generalization of behavioural responses across different anxiety models supports a possible construct validity of the LD model for rats. However, this validity parameter is compromised by the neurochemical results of these animals, since they reveal no changes in monoamine concentrations in the brain areas analyzed. On the other hand, the similarity of the neurochemical results in the LD and EPM tests indicates that these tests are strongly influenced by the conditions and procedures used, an idea reinforced by the increase in serotonin levels in the hippocampus of saline-injected individuals. These results highlight the importance of, and need for, standardizing protocols and carefully controlling procedures at all stages of the experiment, to make it possible to use these models as anxiety inducers. Diazepam and propranolol caused no anxiolytic effects on the behavioural responses; however, it should not be forgotten that the doses applied are lower than those used in other studies that showed anxiolytic effects. On the other hand, the data on the effects of the anxiolytic agents show that the drugs have opposite effects in the rat, in both the EPM and the LD. Moreover, this effect is reflected at both the behavioural and the neurochemical level. Behaviourally, diazepam induced a reduction in alertness whereas propranolol induced an increase in alertness.
Neurochemically, diazepam induced a decrease in serotonin levels in the hippocampus, unlike propranolol, which induced an increase in serotonin levels in the same brain area. These results suggest that propranolol appears to have an anxiogenic effect caused by the handling of the animals combined with propranolol's blockade of the reconsolidation processes of active memory, which eliminated the effects of habituation to handling-induced stress. These neurochemical changes were observed only in the rats exposed to the EPM, indicating that the EPM and the LD trigger distinct aspects of anxiety. These results compromise the predictive validity of the model, but the procedures carried out are insufficient to draw conclusions about this parameter. In the hormonal responses produced by these tests, no increase in corticosterone levels was observed after exposure to the anxiety tests; however, these data may have been influenced by aspects of circadian variation, which makes it difficult to draw inferences about the external validity criterion. The heterogeneity of existing pathological forms of anxiety makes it difficult for any anxiety model to meet all three validity criteria; nevertheless, this work indicates that the LD model is not an ideal anxiety model for rats. The results of this study are a further contribution to the growing evidence of the importance and necessity of a complex analysis based on behavioural observations in improving the credibility and sensitivity of these animal models.
APA, Harvard, Vancouver, ISO, and other styles
21

Kwok, Yuen Wai-yee Victoria, and 郭原慧儀. "An exploration of an integrated service delivery model for the home help service in Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1986. http://hub.hku.hk/bib/B31974776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Saghafi, Salman. "A Framework for Exploring Finite Models." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-dissertations/458.

Full text
Abstract:
This thesis presents a framework for understanding first-order theories by investigating their models. A common application is to help users, who are not necessarily experts in formal methods, analyze software artifacts, such as access-control policies, system configurations, protocol specifications, and software designs. The framework suggests a strategy for exploring the space of finite models of a theory via augmentation. Also, it introduces a notion of provenance information for understanding the elements and facts in models with respect to the statements of the theory. The primary mathematical tool is an information-preserving preorder, induced by the homomorphism on models, defining paths along which models are explored. The central algorithmic ideas consist of a controlled construction of the Herbrand base of the input theory followed by the use of SMT solving to generate models that are minimal under the homomorphism preorder. Our framework for model exploration is realized in Razor, a model-finding assistant that provides the user with a read-eval-print loop for investigating models.
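As a toy illustration of the model-exploration idea described above, the sketch below brute-forces all finite models of a two-axiom theory on a two-element domain and keeps the minimal ones under the subset order. Razor's actual machinery (Herbrand-base construction, SMT solving, the homomorphism preorder) is far more sophisticated; treat this only as an intuition pump, with all names invented here:

```python
from itertools import product

# Tiny illustrative theory over one binary relation R on a 2-element domain:
#   (1) R is reflexive:  forall x. R(x, x)
#   (2) R is symmetric:  forall x, y. R(x, y) -> R(y, x)
domain = [0, 1]
pairs = [(a, b) for a in domain for b in domain]

def satisfies(R):
    reflexive = all((x, x) in R for x in domain)
    symmetric = all((y, x) in R for (x, y) in R)
    return reflexive and symmetric

# Enumerate all 2^4 interpretations of R and keep those satisfying the theory.
models = []
for bits in product([False, True], repeat=len(pairs)):
    R = {p for p, keep in zip(pairs, bits) if keep}
    if satisfies(R):
        models.append(R)

# Minimal models under the proper-subset order (a crude stand-in for the
# homomorphism preorder the thesis uses).
minimal = [m for m in models if not any(o < m for o in models)]
print(len(models), len(minimal))  # prints "2 1"
```

The one minimal model contains only the facts forced by the axioms, which mirrors the role minimal models play as uncluttered starting points for exploration via augmentation.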
APA, Harvard, Vancouver, ISO, and other styles
23

Trapp, Matthias. "Analysis and exploration of virtual 3D city models using 3D information lenses." Master's thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1393/.

Full text
Abstract:
This thesis addresses real-time rendering techniques for 3D information lenses based on the focus & context metaphor. It analyzes, conceives, implements, and reviews their applicability to objects and structures of virtual 3D city models. In contrast to digital terrain models, the application of focus & context visualization to virtual 3D city models is barely researched. However, the purposeful visualization of contextual data is of extreme importance for the interactive exploration and analysis in this field. Programmable hardware enables the implementation of new lens techniques that augment the perceptive and cognitive quality of the visualization compared to classical perspective projections. A set of 3D information lenses is integrated into a 3D scene-graph system: • Occlusion lenses modify the appearance of virtual 3D city model objects to resolve their occlusion and consequently facilitate navigation. • Best-view lenses display city model objects in a priority-based manner and mediate their meta information. Thus, they support exploration and navigation of virtual 3D city models. • Color and deformation lenses modify the appearance and geometry of 3D city models to facilitate their perception. The presented techniques for 3D information lenses and their application to virtual 3D city models clarify their potential for interactive visualization and form a base for further development.
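A focus-&-context lens can be pictured as a spatial weighting function that emphasizes objects near the lens focus and attenuates the context. The sketch below is a deliberate reduction of that idea to a few lines of CPU code; the function names, parameters, and blend rule are all invented for illustration, whereas the thesis implements such lenses on programmable GPU hardware:

```python
import math

def lens_weight(obj_pos, focus, radius=5.0, falloff=2.0):
    """1.0 inside the lens focus region, smoothly decaying toward the context."""
    d = math.dist(obj_pos, focus)
    if d <= radius:
        return 1.0
    return math.exp(-(((d - radius) / falloff) ** 2))

def shade(base_color, weight, context_gray=0.35):
    """Blend an object's color toward neutral gray as lens influence drops."""
    return tuple(weight * c + (1 - weight) * context_gray for c in base_color)

focus = (0.0, 0.0, 0.0)
building_a = (1.0, 2.0, 0.0)    # inside the lens: keeps its full color
building_b = (20.0, 5.0, 0.0)   # far context: faded toward gray
print(shade((0.8, 0.2, 0.2), lens_weight(building_a, focus)))
print(shade((0.8, 0.2, 0.2), lens_weight(building_b, focus)))
```

An occlusion or deformation lens would vary geometry rather than color, but the same pattern applies: a per-object weight derived from the lens position drives the visual change.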
APA, Harvard, Vancouver, ISO, and other styles
24

Garcia, Anthony. "The community school concept : an exploration of organizational models and theoretical potential." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:23cc81b9-b901-47e1-a200-a1625842cad1.

Full text
Abstract:
The aims of this mixed methods research study, which applies a case study research strategy to six U.S. community schools, are to understand three community school models (national, local federation, and independent) and their potential to improve conditions for students, their parents, and other community members. The primary research questions being addressed are (1) what distinguishing characteristics does each of the three community school models possess in the areas of aims and motivations, implementation strategies, and impacts, and (2) does the community school concept have the potential to strengthen individual settings, the links between such settings, and broad features that are located in the 'human ecological systems' of students (Bronfenbrenner, 1977, 1979, 1986)? Collectively, practitioner-based and academic works, which have investigated community schools in the United States, England, Scotland, Australia, the Netherlands, and Northern Ireland, conclude that community schools provide individuals with access to a variety of offerings, that the operations of community schools are complex, thus requiring the establishment of a community school director position, effective partnerships, and healthy communication, and that community schools typically achieve mixed academic performance results and consistently possess positive 'academic-related' and 'non-academic-related' elements, such as high levels of student interest in school and high levels of family cohesion. To develop a greater understanding of community school operations and academic performance, my exploratory, descriptive research, which is closest to that of the former topic but touches upon the latter, takes on an interpretivist perspective that seeks to illuminate and understand three different community school models (the national, local federation, and independent models) and their environments.
In terms of research question one, the data suggests that distinguishing characteristics, which can be described as 'inherent' and 'potential,' exist among the three community school models. For example, the national and local federation models possess the 'inherent' characteristic that individual schools have access to a headquarters, foundation, or a central office. Additionally, in terms of 'potential' characteristics, the data suggests that the national model displays the greatest strength amongst the three models in the areas of implementation strategies and achieved academic performance results; whereas, the independent model displays the greatest strength amongst the three models in the areas of aims and motivations and being perceived as having had strong positive impacts on academic performance, 'academic-related' elements, and 'non-academic-related' elements. It should be noted, though, that all three models were consistently viewed as possessing a variety of favorable 'academic-related' and 'non-academic-related' elements. The findings, overall, provide new evidence on the role of a model. Essentially, the nature of the model adopted has consequences. As for research question two, the data drawn from across the six cases suggests that the community school concept has the potential to strengthen individual settings, such as schools and student peer groups, the links between such settings, and broad features, such as the state of poverty and the political culture, that reside in the 'human ecological systems' of students. I hope this study provides policymakers and practitioners with relevant evidence that helps them decide between the 'traditional' public school approach and the community school approach and amongst the three community school models.
Additionally, this study seeks to provide a basis for future research on community school models and the theoretical potential of the community school concept that may help improve the lives and settings of disadvantaged students and other community members.
APA, Harvard, Vancouver, ISO, and other styles
25

Grimes, David B. "Learning by imitation and exploration : Bayesian models and applications in humanoid robotics /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/6879.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Tuci, Elio. "An exploration on the evolution of learning behaviour using robot-based models." Thesis, University of Sussex, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288147.

Full text
Abstract:
The work described in this thesis concerns the study of the evolution of simple forms of learning behaviour in artificial agents. Our interest in the phylogeny of learning has been developed within the theoretical framework provided by the "ecological approach" to the study of learning. The latter is a recent theoretical and methodological perspective which, contrary to that suggested by the classical approaches in animal and comparative psychology, has reconsidered the importance of the evolutionary analysis of learning as a species- and niche-specific adaptive process, which should be investigated by employing the conceptual apparatus originally developed by J. J. Gibson within the context of visual perception. However, it has been acknowledged in the literature that methodological difficulties are hindering the evolutionary ecological study of learning. We argue that methodological tools - i.e., artificial agent-based models - recently developed within the context of biologically-oriented cognitive science can potentially represent a complementary methodology to investigate issues concerning the evolutionary history of learning without losing sight of the complexity of the ecological perspective. Thus, the experimental work presented in this thesis contributes to the discussion on the adaptive significance of learning, through the analysis of the evolution of simple forms of associative learning in artificial agents. Part of the work of the thesis focuses on the study of the nature of the selection pressures which facilitate the evolution of associative learning. The results of these simulations suggest that ecological factors might prevent selection from operating in favour of those elements of the "learning machinery" which, given the varying nature of the environment, are of potential benefit for the agents.
Other simulations highlight the properties of the agent control structure and the characteristics of particular features of the ecology of the learning scenario which facilitate the evolution of learning agents.
APA, Harvard, Vancouver, ISO, and other styles
27

Klonaris, Stathis. "Applied demand analysis for food in Greece : exploration of alternative AIDS models." Thesis, University of Reading, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Du, Xiao An. "Confucian-Christian dialogue in China : an exploration in historical and contemporary models." Thesis, University of Birmingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.633085.

Full text
Abstract:
This research is part of the interreligious dialogue movement in the contemporary global context. It is a result of Sino-Western dialogue in the religious dimension that seeks to break down the barriers between Confucianism and Christianity. The strategy of the thesis is to explore the basic models of Confucian-Christian dialogue in historical and contemporary China and to seek common ground for sharing their values in the context of globalization. The main contribution of this thesis is that it has tested the typology of three models (Exclusivism, Inclusivism, and Pluralism) given by Alan Race. It used this typology as a method for studying the history of the Confucian-Christian encounter. Through this research, I try to show that this typology of three models remains the best way to improve interreligious dialogue despite its weaknesses. According to this pluralistic model of liberal theology, I find a clear possibility of achieving the ideal of harmony without sameness between Christianity and Confucianism.
APA, Harvard, Vancouver, ISO, and other styles
29

Umar, Mariam. "Energy and Performance Models Enabling Design Space Exploration using Domain Specific Languages." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/95563.

Full text
Abstract:
With the advent of exascale architectures, maximizing performance while maintaining energy consumption within reasonable limits has become one of the most critical design constraints. This constraint is particularly significant in light of the power budget of 20 MWatts set by the U.S. Department of Energy for exascale supercomputing facilities. Therefore, understanding an application's characteristics, execution pattern, energy footprint, and the interactions of such aspects is critical to improving the application's performance as well as its utilization of the underlying resources. With conventional methods of analyzing performance and energy consumption trends, scientists are forced to limit themselves to a manageable number of design parameters. While these modeling techniques have catered to the needs of current high-performance computing systems, the complexity and scale of exascale systems demands that large-scale design-space-exploration techniques be developed to enable comprehensive analysis and evaluations. In this dissertation we present research on performance and energy modeling of current high-performance computing and future exascale systems. Our thesis is focused on the design space exploration of current and future architectures, in terms of their reconfigurability, application's sensitivity to hardware characteristics (e.g., system clock, memory bandwidth), application's execution patterns, application's communication behavior, and utilization of resources. Our research is aimed at understanding the methods by which we may maximize performance of exascale systems, minimize energy consumption, and understand the trade-offs between the two. We use analytical, statistical, and machine-learning approaches to develop accurate, portable and scalable performance and energy models. We develop application and machine abstractions using Aspen (a domain specific language) to implement and evaluate our modeling techniques.
As part of our research we develop and evaluate system-level performance and energy-consumption models that form part of an automated modeling framework, which analyzes application signatures to evaluate sensitivity of reconfigurable hardware components for candidate exascale proxy applications. We also develop statistical and machine-learning based models of the application's execution patterns on heterogeneous platforms. We also propose a communication and computation modeling and mapping framework for exascale proxy architectures and evaluate the framework for an exascale proxy application. These models serve as external and internal extensions to Aspen, which enable proxy exascale architecture implementations and thus facilitate design space exploration of exascale systems.
Ph. D.
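The kind of system-level abstraction such modeling frameworks capture can be illustrated with a simple roofline-style analytical model: execution time is bounded by either compute throughput or memory bandwidth, and energy follows from time and average power. This sketch is not Aspen syntax, and every machine and kernel parameter below is made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Machine:
    peak_flops: float      # FLOP/s
    mem_bandwidth: float   # bytes/s
    power_watts: float     # average draw under load

@dataclass
class Kernel:
    flops: float           # total floating-point operations
    bytes_moved: float     # total DRAM traffic

def runtime(m: Machine, k: Kernel) -> float:
    """Time bounded by compute or memory, whichever dominates."""
    return max(k.flops / m.peak_flops, k.bytes_moved / m.mem_bandwidth)

def energy(m: Machine, k: Kernel) -> float:
    """Energy as runtime times average power draw."""
    return runtime(m, k) * m.power_watts

node = Machine(peak_flops=1e12, mem_bandwidth=1e11, power_watts=300.0)
stencil = Kernel(flops=8e12, bytes_moved=4e12)  # arithmetic intensity: 2 FLOP/byte

t = runtime(node, stencil)  # memory-bound here: 4e12 / 1e11 = 40 s
print(f"time {t:.1f} s, energy {energy(node, stencil) / 1e3:.1f} kJ")
```

Sweeping parameters such as `mem_bandwidth` or `power_watts` over candidate configurations is the simplest form of the design space exploration the abstract describes; the dissertation's models additionally account for reconfigurability, communication, and statistical/machine-learned terms.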
APA, Harvard, Vancouver, ISO, and other styles
30

Stone, Devon. "An exploration of alternative features in micro-finance loan default prediction models." Master's thesis, Faculty of Science, 2020. http://hdl.handle.net/11427/32377.

Full text
Abstract:
Despite recent developments, financial inclusion remains a significant issue for the world's unbanked population. Financial institutions - both larger corporations and micro-finance companies - have begun to provide solutions for financial inclusion. The solutions are delivered using a combination of machine learning and alternative data. This minor dissertation focuses on investigating whether alternative features generated from Short Messaging Service (SMS) data and Android application data contained on borrowers' devices can be used to improve the performance of loan default prediction models. The improvement gained by using alternative features is measured by comparing loan default prediction models trained using only traditional credit scoring data to models developed using a combination of traditional and alternative features. Furthermore, the paper investigates which of 4 machine learning techniques is best suited for loan default prediction. The 4 techniques investigated are logistic regression, random forests, extreme gradient boosting, and neural networks. Finally, the paper identifies whether or not accurate loan default prediction models can be trained using only the alternative features developed throughout this minor dissertation. The results of the research show that alternative features improve the performance of loan default prediction across 5 performance indicators, namely overall prediction accuracy, repaid prediction accuracy, default prediction accuracy, F1 score, and AUC. Furthermore, extreme gradient boosting is identified as the most appropriate technique for loan default prediction. Finally, the research identifies that while models trained using the alternative features developed throughout this project can accurately predict loans that have been repaid, they do not accurately predict loans that have not been repaid.
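The model comparison described above can be sketched with scikit-learn on synthetic data. The thesis's SMS- and Android-derived features are not public, so the "traditional" versus "traditional + alternative" split below is purely illustrative, and `GradientBoostingClassifier` stands in for extreme gradient boosting:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: with shuffle=False the informative columns come
# first, so the first 6 features play the "traditional" role and all 12
# play the "traditional + alternative" role.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_redundant=2, shuffle=False, random_state=0)

results = {}
for name, feats in [("traditional", X[:, :6]), ("traditional+alternative", X)]:
    Xtr, Xte, ytr, yte = train_test_split(feats, y, test_size=0.3,
                                          random_state=0)
    for model in (LogisticRegression(max_iter=1000),
                  GradientBoostingClassifier(random_state=0)):
        model.fit(Xtr, ytr)
        auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
        results[(name, type(model).__name__)] = auc

for key, auc in sorted(results.items()):
    print(key, round(auc, 3))
```

Comparing AUC across the two feature sets mirrors the evaluation design of the dissertation; the remaining indicators it reports (overall, repaid, and default accuracy, F1) can be added with `sklearn.metrics` on the same splits.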
APA, Harvard, Vancouver, ISO, and other styles
31

Kennedy, Eric. "The Evolution of Brand Co-Creation: Models and Exploration of Stakeholders' Motivations." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1011858/.

Full text
Abstract:
Co-creation is an emerging phenomenon that occurs when two or more parties work together to create value. Co-creation, which is a key component of service-dominant logic, is present in business to business, business to consumer, and consumer to consumer processes. This dissertation will focus on the business to consumer (and consumer to business) co-creation relationship. Much of the current business to consumer co-creation literature is qualitative in nature, with quantitative work just now beginning to emerge. As such, there is still much about the phenomenon of co-creation that is not understood. When looking at co-creation in the context of brand management, even less is known. In today's age of digital interaction where consumers are gaining more power on a daily basis, practitioners and academics should understand the motivations for consumers to engage brands in co-creation and what the outcomes of these co-creation partnerships are. Because of this, the dissertation contains three essays with the purpose of (1) identifying the motivations for co-creation from consumer and brand perspectives, (2) exploring each of these motivators on their individual relationship to the outcome of co-creation, and (3) understanding how the perceived ability to influence a brand impacts the outcomes of co-creation. Essay 1, titled "Co-creation of brand identities: consumer and industry influence and motivations," aims to develop an understanding of the phenomenon of co-creation and how the practice is used in shaping brand identities. Two studies are undertaken to provide insight into co-creation. First, a qualitative study is used to gain insight from key decision makers with responsibility for a brand. Second, a study of millennial consumers is used to develop the antecedents of consumer motivations of co-creation of brand identities. This essay then presents a comprehensive framework that encompasses two models (industry and consumer) of brand identity co-creation.
Much of the current literature on co-creation is conceptual or qualitative, and these results provide the analytical support for the building blocks of co-creation theory development. Essay 2, titled "An examination of the factors leading to consumer co-creation of brand," further explores the consumer model of co-creation proposed in Essay 1. Through a series of five studies, the factors of social, fun, brand compatibility, brand commitment, and communication appeal are analyzed individually to determine how each factor impacts consumers' willingness to engage in co-creation. The results of this study expand the academic knowledge of co-creation by providing information about why consumers engage with brands in co-creation. Additionally, practitioners will benefit from the descriptive results, which provide insight into which motivations a brand should manipulate if it wishes to engage consumers in co-creation. Essay 3, titled "When perceived ability to influence plays a role: brand co-creation in web 2.0," examines how co-creation is impacted by consumers' attributions about a brand's ability to be influenced. Through two studies focusing on millennial consumers, this essay seeks to understand the attributions that consumers make about brands, what kinds of attributions are made, and what the outcomes of these attributions are in terms of co-creation and perceived influence. This essay enhances the current knowledge of the co-creation phenomenon and provides insight into the importance of a brand being perceived as open to influence, which in turn leads to co-creation and increased purchase intentions. In sum, the three essays contained in this dissertation specify a framework for the antecedents of co-creation, an in-depth analysis of those antecedents, and an examination of how perceived influence impacts co-creation. The resulting body of work provides academics and practitioners with a base to better understand the process of co-creation.
APA, Harvard, Vancouver, ISO, and other styles
32

Rollo, Anthony L. "An Exploration of parametric software cost estimating models for Jackson Systems Development." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10673/.

Full text
Abstract:
Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson Systems Development (JSD). Because current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort and hence cost. In order to address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MkII function point method. The other, JSD-COCOMO, is a sizing technique that sizes a project in terms of lines of code from the process structure diagrams and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric that indicates the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects.
APA, Harvard, Vancouver, ISO, and other styles
33

Kiambi, Dane Mwirigi. "PUBLIC RELATIONS IN KENYA: AN EXPLORATION OF PUBLIC RELATIONS MODELS AND CULTURAL INFLUENCES." Miami University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=miami1282847327.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Schumacher, Linus J. "A mathematical exploration of principles of collective cell migration and self-organisation." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:bba68d2c-352b-4310-89c2-b9049b70515c.

Full text
Abstract:
This thesis explores the role of collective cell migration and self-organisation in the development of the embryo and in vitro tissue formation through mathematical and computational approaches. We consider how population heterogeneity, microenvironmental signals and cell-cell interactions enable cells to collectively organise and navigate, with the aim of uncovering general rules and principles rather than delving into microscopic molecular details. To ensure the biological relevance of our results, we collaborate closely with experimental biologists working on two model systems. First, to understand how neural crest cells obtain directionality, maintain persistence and specialise during their migration, we use computational simulations in parallel with imaging of chick embryos under genetic and surgical perturbations. We show how only a few cells adopting a leader state, which enables them to read out chemical signals, can lead a population of cells in a follower state over long distances in the embryo. Furthermore, we devise and test an improved mechanism by which cells dynamically switch between leader and follower states in the presence of a chemoattractant gradient. Our computational work guides the choice of new experiments, aids in their interpretation and probes hypotheses in ways the experiments cannot. Second, to study the self-organisation of mouse skin cells in vitro, we draw on aggregation processes and scaling theory. Dermal and epidermal cells, after being dissociated and mixed, can reconstitute functional (transplantable and hair-growing) skin in culture. Using kinetic aggregation models and scaling analysis, we show that the initial clustering of epidermal cells can be described by Smoluchowski coagulation, consistent with the dynamics of the "clustering clusters" universality class. We then investigate a potential mechanism for the size regulation of cell aggregates during the later stages of the skin reconstitution process.
Our analysis shows the extent to which this tissue formation follows a single physical process and when the transition to different dynamics occurs, which may be triggered by cellular biochemical changes.
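The Smoluchowski coagulation dynamics invoked in this abstract can be illustrated with a minimal numerical sketch. The constant kernel K, the truncation at 200 cluster sizes, and the forward-Euler stepping are simplifying assumptions made here for illustration; the thesis fits the coagulation model to skin-cell data rather than to this toy setup.

```python
import numpy as np

def smoluchowski_step(n, dt, K=1.0):
    """One forward-Euler step of the constant-kernel Smoluchowski
    coagulation equation. n[k] holds the concentration of clusters
    of size k + 1; sizes beyond len(n) are truncated."""
    kmax = len(n)
    conv = np.convolve(n, n)                       # conv[m] = sum_i n[i] * n[m - i]
    gain = 0.5 * K * np.concatenate(([0.0], conv[:kmax - 1]))
    loss = K * n * n.sum()
    return n + dt * (gain - loss)

# start from monomers only and evolve to t = 10
n = np.zeros(200)
n[0] = 1.0
for _ in range(1000):
    n = smoluchowski_step(n, dt=0.01)

sizes = np.arange(1, len(n) + 1)
print("mean cluster size:", sizes @ n / n.sum())   # analytic mean is 1 + K*t/2 = 6
```

For the constant kernel the mean cluster size grows linearly in time, which is the kind of scaling behaviour the "clustering clusters" universality class describes.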
APA, Harvard, Vancouver, ISO, and other styles
35

Siah, Poh-Chua. "An exploration of the effects of cognitive thinking and affect in attitude judgments." Thesis, University of Exeter, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361350.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Thelin, Christopher Murray. "Application and Evaluation of Full-Field Surrogate Models in Engineering Design Space Exploration." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/8625.

Full text
Abstract:
When designing an engineering part, better decisions are made by exploring the entire space of design variations. This design space exploration (DSE) may be accomplished manually or via optimization. In engineering, evaluating a design during DSE often consists of running expensive simulations, such as finite element analysis (FEA), in order to understand the structural response to design changes. The computational cost of these simulations can make thorough DSE infeasible, so only a relatively small subset of designs is explored. Surrogate models have been used to make cheap predictions of certain simulation results. Commonly, these models predict only single values (SV) meant to represent an entire part's response, such as a maximum stress or average displacement; they cannot return a complete prediction of the detailed nodal results of these simulations. Recently, surrogate models have been developed that can predict the full field (FF) of nodal responses. These FF surrogate models have the potential to make thorough and detailed DSE much more feasible and to introduce further design benefits. However, FF surrogate models have not yet been applied to real engineering activities or demonstrated in DSE contexts, nor have they been directly compared with SV surrogate models in terms of accuracy and benefits. This thesis seeks to build confidence in FF surrogate models for engineering work by applying them to real DSE and engineering activities and exploring their comparative benefits over SV surrogate models. A user experiment exploring the effects of FF surrogate models in simple DSE activities helps validate previous claims that FF surrogate models can enable interactive DSE. FF surrogate models are used to create Goodman diagrams for fatigue analysis and are found to be more accurate than SV surrogate models in predicting fatigue risk.
Mode shapes are also predicted; when the data are highly nonlinear, accurate mode-comparison predictions are found to require more training samples than SV surrogate models need. Finally, FF surrogate models enable spatially defined objectives and constraints in optimization routines that efficiently search a design space and improve designs. The studies in this work present many unique FF-enabled design benefits for real engineering work, including predicting a complete (rather than a summary) response, enabling interactive DSE of complex simulations, new three-dimensional visualizations of analysis results, and increased accuracy.
APA, Harvard, Vancouver, ISO, and other styles
37

Mobley, John Thomas Jr. "An exploration of models for estimating evapotranspiration from stream fluctuations in riparian corridors." Connect to online resource, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1442921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Al-Refai, Nader Sudqy. "An exploration of Islamic studies curriculum models in Muslim secondary schools in England." Thesis, University of Derby, 2011. http://hdl.handle.net/10545/232612.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Hill, J. "Individual differences in adaptability to shiftwork : an exploration of models of shiftwork tolerance." Thesis, Swansea University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637278.

Full text
Abstract:
In order to refine models of shiftwork tolerance, the purported relationships between outcomes and modifiers of the adaptation process were explored. A series of empirical studies amongst shiftworkers, across a variety of work patterns and industries, examined the efficacy of demographic, circadian, personality and work-related variables as predictors of shiftwork tolerance. Trends were shown to be attenuated by shift type, industry type and the length of exposure to the shift system. Using a phenomenological approach, Study 1 conducted a series of semi-structured interviews investigating the aetiology and management of effects through the eyes of shiftworkers themselves. Analysis of recurrent themes supported established trends in the literature and some fit with the models, highlighting both outcome and, to a lesser extent, modifier variables. New relationships were also identified. Study 2 used this information to design a questionnaire for the collection of more objective data from the same site. Outcomes were capable of being meaningfully reduced into major problem domains. The number and predictive validity of modifiers varied according to the outcome under investigation, with similarities emerging between outcomes that correlated strongly with one another. Using the same approach, Study 3 examined the effect of the type of shift worked. The extent of problems and the patterns of prediction showed a strong shift-dependent effect, with reliable trends emerging between those groups involved in nightwork and those not. Studies 4 and 5 explored the effect of short-term (5 weeks) and long-term (12 months) exposure. Despite predictive relationships being stronger at follow-up, they were inconsistent over time, suggesting that such interactions are an evolving process.
Regardless of the type of shift, industry, or length of exposure, attitudes towards shiftwork were most strongly predicted by work-related modifiers, health outcomes by circadian/personality modifiers, and sleep duration by demographic modifiers, suggesting that specific domains are differentially mediated.
APA, Harvard, Vancouver, ISO, and other styles
40

Müller, Andrea. "Humpback whales, rock lobsters and mathematics : exploration of assessment models incorporating stock-structure." Master's thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/11342.

Full text
Abstract:
This thesis presents four marine resource assessments; three concern the Southern Hemisphere (SH) humpback whale (Megaptera novaeangliae) and one the South African east coast rock lobster Palinurus delagoae. It also sets out the statistical background to the methodology employed in the assessments, including an outline of the Bayesian approach, Bayes' theorem, and the sampling-importance-resampling (SIR) and Markov chain Monte Carlo (MCMC) methods.
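As a reminder of how the SIR method mentioned above operates, here is a minimal sketch with a toy Gaussian prior and likelihood. The assessments themselves use problem-specific priors and population-dynamics likelihoods, so every distribution and parameter below is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_sample(prior_sample, log_likelihood, n_draw=10000, n_keep=1000):
    """Sampling-importance-resampling (SIR): draw from the prior,
    weight each draw by its likelihood, then resample in proportion
    to the weights to approximate the posterior."""
    draws = prior_sample(n_draw)
    logw = np.array([log_likelihood(d) for d in draws])
    logw -= logw.max()                     # stabilise before exponentiating
    w = np.exp(logw)
    w /= w.sum()
    idx = rng.choice(n_draw, size=n_keep, replace=True, p=w)
    return draws[idx]

# toy example: N(0, 1) prior, one observation y = 1.0 with unit noise,
# so the true posterior is N(0.5, 0.5)
post = sir_sample(
    lambda m: rng.normal(0.0, 1.0, m),
    lambda th: -0.5 * (1.0 - th) ** 2,
)
print("posterior mean ~", post.mean())
```

The resampled draws approximate the posterior without ever evaluating a normalising constant, which is what makes SIR attractive for the kinds of assessment models described in the abstract.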
APA, Harvard, Vancouver, ISO, and other styles
41

Sacepe, Karine. "Espaces collaboratifs d'innovation corporate et exploration de nouveaux business models : le cas d’Altran." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX069.

Full text
Abstract:
To face competitive pressures, established companies are driven to renew their business models. At the same time, more and more innovation venues are being developed within them. In this thesis, we study how these venues, which we call corporate collaborative innovation spaces (CIS), contribute to the exploration of new business models. A longitudinal ethnographic case study within the Altran company explores the place of CIS in the organisation, their actual activities around customers, and the use of the artefacts and equipment made available to users. The results show the managerial-innovation dimension of these CIS, a product of organisational ambidexterity; decipher the micro-processes leading to the emergence of a new value proposition and customer relationship; and situate the role of demonstrators in constructing a discourse that offers visitors a new customer experience. We thereby deepen the understanding of how CIS function and contribute to enriching theories of business model innovation, particularly in their multidimensional and systemic perspectives. We conclude by proposing ways to optimise the use of CIS through better appropriation by internal resources, particularly the sales force.
APA, Harvard, Vancouver, ISO, and other styles
42

Glavin, Stephen John. "Creating sound symbols from digital terrain models: an exploration of cartographic communication forms." Carleton University dissertation (Geography), Ottawa, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
43

Gao, Song. "The Exploration of the Relationship Between Guessing and Latent Ability in IRT Models." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/dissertations/423.

Full text
Abstract:
This study explored the relationship between successful guessing and latent ability in IRT models. A new IRT model was developed with a guessing function that integrates the probability of guessing an item correctly with the examinee's ability and the item parameters. The conventional 3PL IRT model was compared with the new 2PL-Guessing model on parameter estimation using the Monte Carlo method. A SAS program was used to implement the data simulation and the maximum likelihood estimation. Compared with the traditional 3PL model, the new model should exhibit: a) a maximum probability of guessing of no more than 0.5, even for the highest-ability examinees; b) different probabilities of successful guessing for examinees of different ability, because a basic assumption of the new model is that higher-ability examinees have a higher probability of successful guessing than lower-ability examinees; c) smaller standard errors in parameter estimation; and d) faster running time. The results illustrated that the new 2PL-Guessing model was superior to the 3PL model in all four aspects.
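The required properties (a) and (b) can be illustrated with a hedged sketch of a 2PL item response function whose guessing floor depends on ability. The scaled-logistic guessing function c(theta) and its parameters a_c and b_c are assumptions made here for illustration; the dissertation's exact guessing function is not reproduced.

```python
import numpy as np

def p_correct_2pl_guessing(theta, a, b, a_c=1.0, b_c=0.0):
    """Illustrative 2PL model with an ability-dependent guessing term.
    The guessing probability c(theta) rises with ability but is capped
    at 0.5, matching properties (a) and (b) of the abstract; the
    scaled-logistic form is assumed here for illustration only."""
    c = 0.5 / (1.0 + np.exp(-a_c * (theta - b_c)))   # guessing floor in (0, 0.5)
    p2pl = 1.0 / (1.0 + np.exp(-a * (theta - b)))    # standard 2PL response
    return c + (1.0 - c) * p2pl

thetas = np.array([-2.0, 0.0, 2.0])
print(p_correct_2pl_guessing(thetas, a=1.2, b=0.5))
```

Low-ability examinees get a small guessing floor and high-ability examinees approach, but never exceed, a floor of 0.5, so the overall response probability remains monotone in ability.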
APA, Harvard, Vancouver, ISO, and other styles
44

Wu, Wencen. "Bio-inspired cooperative exploration of noisy scalar fields." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48940.

Full text
Abstract:
A fundamental problem in mobile robotics is the exploration of unknown fields that might be inaccessible or hostile to humans. Exploration missions of great importance include geological survey, disaster prediction and recovery, and search and rescue. For missions in relatively large regions, mobile sensor networks (MSN) are ideal candidates. The basic idea of MSN is that mobile robots form a sensor network that collects information while the behaviors of the robots adapt to changes in the environment. To design feasible motion patterns and control for MSN, we draw inspiration from biology, where animal groups demonstrate amazingly complex but adaptive collective behaviors in changing environments. The main contributions of this thesis include platform-independent mathematical models for the coupled motion-sensing dynamics of MSN and biologically inspired, provably convergent cooperative control and filtering algorithms for MSN exploring unknown scalar fields in both 2D and 3D spaces. We introduce a novel model of the behaviors of mobile agents that leads to fundamental theoretical results for evaluating the feasibility and difficulty of exploring a field using MSN. Under this framework, we propose and implement source-seeking algorithms for MSN inspired by the behaviors of fish schools. To balance cost and performance in exploration tasks, a switching strategy, which allows the mobile sensing agents to switch between individual and cooperative exploration, is developed. Compared to fixed strategies, the switching strategy brings more flexibility to engineering design. To reveal the geometry of 3D spaces, we propose a control and sensing co-design for MSN to detect and track a line of curvature on a desired level surface.
APA, Harvard, Vancouver, ISO, and other styles
45

Lindmark, Daniel. "Simulation based exploration of a loading strategy for a LHD-vehicle." Thesis, Umeå universitet, Institutionen för fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-126419.

Full text
Abstract:
Optimizing the loading process of a front-loader vehicle is a challenging task. The design space is large and depends on the design of the vehicle, the strategy of the loading process, the nature of the material to load, and so on. Finding an optimal loading strategy with respect to production and equipment damage would greatly improve production and reduce environmental impacts in mining and construction. In this thesis, a method for exploring the design space of a loading strategy is presented. The loading strategy depends on four design variables that control the shape of the trajectory relative to the shape of the pile. The responses investigated are production, vehicle damage and work interruptions due to rock spill. Using multi-body dynamic simulations, many different strategies can be tested at little cost. The results of these simulations are then used to build surrogate models of the original unknown function. The surrogate models are used to visualize and explore the design space and to construct Pareto fronts for the competing responses. The surrogate models were able to predict the production function from the simulations well. The damage and rock-spill surrogate models were only moderately good at predicting the simulations, but still good enough to explore how the design variables affect the response. The produced Pareto fronts make it easy for the decision maker to compare sets of design variables and choose an optimal design for the loading strategy.
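The Pareto fronts for competing responses can be constructed with a simple non-dominated filter over the surrogate-predicted design points, sketched below; the (negated production, damage) pairs are hypothetical examples, not values from the simulations.

```python
import numpy as np

def pareto_front(points):
    """Return the indices of non-dominated points, assuming every
    objective is to be minimised (e.g. negated production, damage,
    rock spill). A point is dropped only if some other point is at
    least as good in all objectives and strictly better in one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(
            np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

# hypothetical (negated production, damage) pairs for five strategies
designs = [(-10.0, 3.0), (-8.0, 1.0), (-10.0, 4.0), (-6.0, 0.5), (-9.0, 2.0)]
print(pareto_front(designs))   # → [0, 1, 3, 4]
```

Plotting the kept points against the dropped ones gives exactly the kind of front the thesis uses to let a decision maker trade production against damage and rock spill.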
APA, Harvard, Vancouver, ISO, and other styles
46

He, Yanzhang. "Segmental Models with an Exploration of Acoustic and Lexical Grouping in Automatic Speech Recognition." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429881253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Perkins, Mary J. "Models of partnership working : an exploration of English NHS and university research support offices." Thesis, University of Bath, 2011. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.547640.

Full text
Abstract:
Clinical and applied health research is led by academics and often conducted in the National Health Service (NHS). Researchers work with Research Support Offices in both Universities and the NHS. The 2006 government health research strategy, Best Research for Best Health heralded dramatic changes for both the funding of, and support for, clinical and applied health research in England with the creation of new, quality driven, competitive funding streams and a new infrastructure to support research and researchers. One of the results of these changes was to drive NHS and University Research Support Offices closer together, with some institutions forming close partnerships, including joint offices to deliver support for clinical and applied health research. Little is known about the models of partnership working between the universities and the NHS and the factors that drove the decisions to create partnership Research Support Offices. Therefore it is important to map current arrangements and describe the factors that contribute to those arrangements. Firstly a survey of University Research Support Offices based in universities with a medical school was undertaken to provide a snapshot of the structures and functions of those Research Support Offices. Then semistructured interviews were undertaken with a sample of staff working in joint NHS/University and separate NHS and University Research Support Offices to gain a deeper understanding of why the Research Support Offices were structured and functioned in the ways that they did. 
The main findings from this work were: there are no common structures, functions or systems, and few common processes, in place to support clinical and applied health researchers across England; advice and help for navigating the complex regulatory environment currently underpinning clinical and applied health research in England is fragmented; and three models of working between NHS and university Research Support Offices were identified: joint offices, collaborative offices and separate offices. The drivers for joint working between NHS and university Research Support Offices are compelling. However, the barriers to working closely can be immense if not carefully considered. Those contemplating working in partnership need to ensure that they understand what the partnership aims to deliver, and all partners need to commit to a shared vision. In addition, practical issues such as the systems to be used, the physical location of staff and employment issues must be addressed before meaningful joint working can occur.
APA, Harvard, Vancouver, ISO, and other styles
48

Panzner, Maximilian [Verfasser]. "Learning action models by curiosity driven self exploration and language guided generalization / Maximilian Panzner." Bielefeld : Universitätsbibliothek Bielefeld, 2020. http://d-nb.info/1211474844/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Vollet, Justin William. "Capturing Peers', Teachers', and Parents' Joint Contributions to Students' Engagement: an Exploration of Models." PDXScholar, 2017. https://pdxscholar.library.pdx.edu/open_access_etds/3774.

Full text
Abstract:
Building on research that has focused on understanding how peers contribute to students' engagement, this dissertation explores the extent to which peer group influences on students' engagement may add to, and be contextualized by, the qualities of the relationships students maintain with their teachers and parents. To focus on how each of these adult contexts works in concert with peer groups to jointly contribute to changes in students' engagement, the two studies used data on 366 sixth graders collected at two time points during their first year of middle school: peer groups were identified using socio-cognitive mapping; students reported on teacher and parent involvement; and teachers reported on each student's engagement. In both studies, models of cumulative and contextualized joint effects were examined. Consistent with models of cumulative effects, peer group engagement, parent involvement, and teacher involvement each uniquely predicted changes in students' engagement. Consistent with contextualized models suggesting differential susceptibility, peer group engagement was a more pronounced predictor of changes in engagement for students who experienced relatively low involvement from teachers. Similarly, peer group influences on changes in students' engagement were stronger for students who experienced relatively low involvement from their parents. In both cases, these peer effects were positive or negative depending on the engagement versus disaffection of each student's peer group. Both studies also used person-centered analyses to reveal cumulative and contextualized effects. Most engaged were students who experienced support from either both teachers and peers or both parents and peers; the lowest levels of engagement were found among students who affiliated with disaffected peers and also experienced either their teachers or parents as relatively uninvolved.
Both high teacher and high parent involvement partially protected students from the motivational costs of affiliating with disaffected peers. Similarly, belonging to engaged peer groups partially buffered students' engagement from the ill effects of low teacher and parent involvement. These findings suggest that, although peer groups and teachers and parents are each important individually, a complete understanding of their contributions to students' engagement requires the examination of their joint effects.
APA, Harvard, Vancouver, ISO, and other styles
50

Lartigue, Thomas. "Mixtures of Gaussian Graphical Models with Constraints Gaussian Graphical Model exploration and selection in high dimension low sample size setting." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX034.

Full text
Abstract:
Describing the co-variations between several observed random variables is a delicate problem. Dependency networks are popular tools that depict the relations between variables through the presence or absence of edges between the nodes of a graph. In particular, conditional correlation graphs are used to represent the "direct" correlations between nodes of the graph. They are often studied under the Gaussian assumption and consequently referred to as Gaussian Graphical Models (GGM). A single network can be used to represent the overall tendencies identified within a data sample. However, when the observed data are sampled from a heterogeneous population, there exist different sub-populations that each need to be described by their own graph. Moreover, if the sub-population (or "class") labels are not available, unsupervised approaches must be implemented in order to correctly identify the classes and describe each of them with its own graph. In this work, we tackle the fairly new problem of hierarchical GGM estimation for unlabelled heterogeneous populations. We explore several key axes to improve the estimation of the model parameters as well as the unsupervised identification of the sub-populations. Our goal is to ensure that the inferred conditional correlation graphs are as relevant and interpretable as possible. First, in the simple, homogeneous-population case, we develop a composite method that combines the strengths of the two main state-of-the-art paradigms to correct their weaknesses. For the unlabelled heterogeneous case, we propose to estimate a mixture of GGMs with an Expectation-Maximisation (EM) algorithm. In order to improve the solutions of this EM algorithm, and to avoid falling into sub-optimal local extrema in high dimension, we introduce a tempered version of the EM algorithm, which we study theoretically and empirically. Finally, we improve the clustering of the EM by taking into account the effect that external co-features can have on the position of the observed data in space.
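The tempering mechanism can be sketched in a few lines: the E-step responsibilities are computed from the per-cluster log joint densities scaled by an inverse temperature beta = 1/T, which flattens the posterior when beta < 1. The exact tempering schedule studied in the thesis is not reproduced here; this only illustrates the flattening effect.

```python
import numpy as np

def tempered_responsibilities(log_joint, beta):
    """E-step of a tempered EM: responsibilities are a softmax of the
    per-cluster log joint densities log p(x_i, z_i = k), scaled by the
    inverse temperature beta = 1/T. beta < 1 flattens the posterior,
    the mechanism used to escape poor local optima in high dimension."""
    g = beta * np.asarray(log_joint, dtype=float)
    g -= g.max(axis=1, keepdims=True)        # stabilise the softmax
    r = np.exp(g)
    return r / r.sum(axis=1, keepdims=True)

log_joint = np.log(np.array([[0.9, 0.1], [0.6, 0.4]]))
print(tempered_responsibilities(log_joint, beta=1.0))   # ordinary EM E-step
print(tempered_responsibilities(log_joint, beta=0.2))   # flattened posterior
```

At beta = 1 the ordinary EM responsibilities are recovered; lowering beta pushes every responsibility toward the uniform 1/K, letting early iterations explore cluster assignments before the temperature is annealed back to 1.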
APA, Harvard, Vancouver, ISO, and other styles