Dissertations / Theses on the topic 'Paramedic learning'

To see the other types of publications on this topic, follow the link: Paramedic learning.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Paramedic learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Taylor, Natasha. "Fear, performance and power : a study of simulation learning in paramedic education." Thesis, University of East Anglia, 2012. https://ueaeprints.uea.ac.uk/42405/.

Full text
Abstract:
Simulation or scenario learning is an integral part of student paramedic development and yet, despite the increasing amount of paramedic research, very little is known about how students and tutors experience it. The current literature regards simulation as invaluable without exploring why this may be the case, and this study aims to address that gap. This is a compressed-time-mode ethnographic study that incorporates data from student paramedics during and immediately after simulation learning events, together with tutors' views of facilitating the simulation experience. This, along with a comprehensive literature review, provides an overview of simulation in the student paramedic development pathway. The thesis exposes how student paramedics find the simulation process anxiety-provoking and explores the many reasons for this. The performance aspect of scenarios is echoed in the dramaturgical language used when talking about simulation learning events, and the similarity between simulation learning events and simulation assessment events merely adds to this stress. Using the lens of critical pedagogy, issues of power (control and hierarchy) within the educational and organisational structures are examined and offered as another possible explanation for the high levels of anxiety in simulation learning. The thesis ends with the question of whether simulation learning can be changed for the better and, if so, how.
APA, Harvard, Vancouver, ISO, and other styles
2

Hobbs, Lisa Rose. "Australasian paramedic attitudes and perceptions about continuing professional development." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/134081/1/Lisa%20Rose%20Hobbs%20Thesis_Redacted.pdf.

Full text
Abstract:
This study utilised constructivist grounded theory to explore the attitudes, engagement and perceptions of current Australasian paramedics in relation to CPD. The study found that paramedics have not significantly modified their engagement in CPD/LLL despite professional registration. There is, however, some confusion surrounding what constitutes CPD. Furthermore, education appears to be a new form of hierarchical stigmatisation within the paramedic culture. The study facilitated the creation of a framework of paramedic CPD, which includes CPD models, PDP, reflective practices and LLL. The framework acknowledges the professional, industrial, social, personal, political, organisational and economic factors which influence or change paramedic engagement in CPD.
APA, Harvard, Vancouver, ISO, and other styles
3

Villers, Lance Carlton. "Influences of situated cognition on tracheal intubation skill acquisition in paramedic education." [College Station, Tex.: Texas A&M University], 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jones, Indra. "Reflective practice and the learning of health care students." Thesis, University of Hertfordshire, 2009. http://hdl.handle.net/2299/3471.

Full text
Abstract:
Reflective practice, though ill-defined, has become an accepted educational concept within many health care disciplines, particularly nursing. Subsequently, it has become benchmarked within Paramedic Sciences as a professional requirement for continuing education and clinical practice. However, despite the vast nursing literature and the growing presence of reflective practice in paramedic curricula, it is unclear how it influences students' learning in preparation for graduate practice as future reflective practitioners. This research explored the question: to what extent does reflective practice in the paramedic curriculum influence students' academic and clinical learning leading to graduate practice? A mixed-methods approach with cohort samples of undergraduate health care students comprised four studies, including surveys and non-participant observations of clinical simulation, conducted in a university learning environment. The results showed overall that paramedic students believed that they understood reflective practice and perceived it to be useful for their academic studies and clinical practice, although this belief is probably shaped more by formal teaching than by their own independently formed views. Students were able to describe reflective practice in ideal theoretical terms and were positive towards it regardless of their individual learning styles. However, in a clinical context, they applied it differently, with significant emphasis on technical reflection. Evidence of the nature of reflective practice as it occurred during and after clinical simulation scenarios highlights a need for revised approaches to existing learning/teaching strategies with paramedic students. An extended understanding and refinement of reflective practice concepts, including a new pedagogic framework to promote enhanced reflectivity, are proposed. This theoretical framework is designed to accommodate reflective learning for both personal and collaborative learning related to curriculum outcomes. The use of clinical simulation for the development of reflective practice in the paramedic curriculum is supported, with recommendations for further studies in academic and clinical settings.
APA, Harvard, Vancouver, ISO, and other styles
5

Liebenberg, Nuraan. "A critical analysis of pre-hospital clinical mentorship to enable learning in emergency medical care." Thesis, Cape Peninsula University of Technology, 2018. http://hdl.handle.net/20.500.11838/2737.

Full text
Abstract:
Thesis (Master of Emergency Medical Care)--Cape Peninsula University of Technology, 2018.
For emergency medical care (EMC), clinical mentorship can be thought of as the relationship between EMC students and qualified emergency care personnel. Through this relationship, students may be guided, supported and provided with information to develop the knowledge, skills and professional attributes needed for delivering quality clinical emergency care. However, this relationship is poorly understood, and the focus of this research was to explore how it enabled or constrained learning. Having experienced mentorship first as a student in EMC and then as an operational paramedic mentoring students, I had an insider perspective on clinical mentorship and on the experiences of fellow students. Some of the practices I observed may not have promoted learning, and this is where my interest in pre-hospital clinical mentorship in relation to learning began. The aim of this research was to present a qualitative analysis of the clinical mentorship relationship in pre-hospital EMC involving the qualified pre-hospital emergency care practitioner (ECP) and the EMC student. The objectives included gaining an understanding of what enabled and/or constrained learning in EMC, exploring clinical mentorship and learning in the pre-hospital EMC context, and gaining an understanding of the role and scope of community members in the clinical mentorship activity system. The purpose of this study was to qualitatively document, by means of a thematic analysis, the pre-hospital clinical mentorship relationship, as well as to document, by means of a Cultural Historical Activity Theory (CHAT) analysis, the clinical mentorship activity system. The focus of this qualitative documentation was the enablements and constraints to learning during clinical mentorship. This research also made possible recommendations for EMC clinical mentorship and education, and may also inform PBEC policy as well as work-integrated learning (WIL) policy. Data collection included the use of diaries and focus group interviews. Analysis proceeded in two parts: data were first reduced and understood through thematic analysis guided by Braun and Clarke's (2006) six-phase process (explained in Chapter Three, Section 3.6). Thereafter, a CHAT analysis was conducted to uncover contradictions within the clinical mentorship activity system that made working on the object of activity difficult, thereby also uncovering constraints to learning. Inductive reasoning was applied in the thematic analysis to reduce the data and identify themes and subthemes, which provided insight into the enablements and constraints to learning in the pre-hospital EMC clinical mentorship relationship. The CHAT analysis brought to the surface the affordances and tensions, as well as the primary-level and secondary-level contradictions, of the clinical mentorship activity system. The thematic analysis of the clinical mentorship relationship provided only a limited understanding of the enablements and constraints to learning, and thus motivated deeper analysis with CHAT. The results of this research included primary- and secondary-level contradictions for almost all elements of the clinical mentorship activity system. Contradictions amongst the Division of Labour (DoL), the rules, and the tools/resources of the activity system constrained the interaction and activity of the subject and the community while working on the object of the activity system, possibly leading to a lesser or undesired outcome of clinical mentorship.
APA, Harvard, Vancouver, ISO, and other styles
6

Lau, Man-kin, and 劉文建. "Learning by example for parametric font design." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B41897183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zewdie, Dawit (Dawit Habtamu). "Representation discovery in non-parametric reinforcement learning." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91883.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 71-73).
Recent years have seen a surge of interest in non-parametric reinforcement learning. There are now practical non-parametric algorithms that use kernel regression to approximate value functions. The correctness guarantees of kernel regression require that the underlying value function be smooth. Most problems of interest do not satisfy this requirement in their native space, but can be represented in such a way that they do. In this thesis, we show that the ideal representation is one that maps points directly to their values. Existing representation discovery algorithms that have been used in parametric reinforcement learning settings do not, in general, produce such a representation. We go on to present Fit-Improving Iterative Representation Adjustment (FIIRA), a novel framework for function approximation and representation discovery, which interleaves steps of value estimation and representation adjustment to increase the expressive power of a given regression scheme. We then show that FIIRA creates representations that correlate highly with value, giving kernel regression the power to represent discontinuous functions. Finally, we extend kernel-based reinforcement learning to use FIIRA and show that this results in performance improvements on three benchmark problems: Mountain-Car, Acrobot, and PinBall.
by Dawit Zewdie.
M. Eng.
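The kernel-regression value approximation at the heart of this line of work is easy to sketch. Below is a minimal Nadaraya-Watson regressor applied to a toy discontinuous value function, the failure mode FIIRA is designed to address. This is an illustration of the underlying technique under my own assumptions, not the thesis's FIIRA algorithm; all names and constants are illustrative.

```python
import numpy as np

def kernel_regression(X_train, y_train, X_query, bandwidth=0.5):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Approximates a value function from sampled (state, value) pairs,
    as in kernel-based reinforcement learning.
    """
    # Pairwise squared distances between query and training points.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))   # kernel weights
    w /= w.sum(axis=1, keepdims=True)        # normalise per query point
    return w @ y_train                       # weighted average of values

# Toy example: a discontinuous "value function" that plain kernel
# regression smooths over -- the failure mode FIIRA targets by
# adjusting the representation instead.
X = np.random.uniform(-1, 1, size=(200, 1))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
print(kernel_regression(X, y, np.array([[0.05], [-0.05]])))
```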
APA, Harvard, Vancouver, ISO, and other styles
8

Lau, Man-kin. "Learning by example for parametric font design." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B41897183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nikbakht, Silab Rasoul. "Unsupervised learning for parametric optimization in wireless networks." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/671246.

Full text
Abstract:
This thesis studies parametric optimization in cellular and cell-free networks, exploring data-based and expert-based paradigms. Power allocation and power control, which adjust the transmit power to meet different fairness criteria such as max-min or max-product, are crucial tasks in wireless communications that fall into the parametric optimization category. State-of-the-art approaches to power control and power allocation often demand huge computational costs and are not suitable for real-time applications. To address this issue, we develop a general-purpose unsupervised-learning approach to solving parametric optimizations, and we extend the well-known fractional power control algorithm. In the data-based paradigm, we create an unsupervised learning framework that defines a custom neural network (NN), incorporating expert knowledge into the NN loss function to solve the power control and power allocation problems. In this approach, a feedforward NN is trained by repeatedly sampling the parameter space, but, rather than solving the associated optimization problem completely, a single step is taken along the gradient of the objective function. The resulting method is applicable to both convex and non-convex optimization problems, and offers a two-to-three order-of-magnitude speedup on the power control and power allocation problems compared to a convex solver, wherever such a solver is applicable. In the expert-driven paradigm, we investigate the extension of fractional power control to cell-free networks. The resulting closed-form solution can be evaluated for uplink and downlink effortlessly, and reaches an (almost) optimal solution in the uplink case. In both paradigms, we place a particular focus on large-scale gains, the amount of attenuation experienced by the local-average received power. The slowly varying nature of the large-scale gains relaxes the need for frequent updates of the solutions in both the data-driven and expert-driven paradigms, enabling real-time application of both methods.
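The single-gradient-step training idea described above can be sketched in a few lines. The following PyTorch fragment is a hypothetical reconstruction under a simplified K-pair interference model; the network architecture, the sum-log-rate objective and all constants are my assumptions, not the thesis's code. Each sampled batch of channel gains contributes one ascent step on the objective instead of a full inner optimization.

```python
import torch
import torch.nn as nn

K = 4                                  # transmitter-receiver pairs (assumed)
net = nn.Sequential(                   # maps channel gains to transmit powers
    nn.Linear(K * K, 64), nn.ReLU(),
    nn.Linear(64, K), nn.Sigmoid(),    # powers constrained to (0, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sum_log_rate(G, p):
    """Max-product objective under a toy interference model (illustrative)."""
    signal = torch.diagonal(G, dim1=-2, dim2=-1) * p       # direct-link power
    interference = (G * p.unsqueeze(-2)).sum(-1) - signal  # cross-link power
    return torch.log2(1 + signal / (interference + 1e-3)).sum(-1).mean()

for step in range(1000):
    G = torch.rand(64, K, K)           # sample the parameter space (gains)
    p = net(G.flatten(1))
    loss = -sum_log_rate(G, p)         # one gradient step, no inner solver
    opt.zero_grad()
    loss.backward()
    opt.step()
```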
APA, Harvard, Vancouver, ISO, and other styles
10

Nasios, Nikolaos. "Bayesian learning for parametric and kernel density estimation." Thesis, University of York, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Smirnov, Dmitriy S. M. Massachusetts Institute of Technology. "Deep learning-based methods for parametric shape prediction." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122770.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 67-76).
Many tasks in graphics and vision demand machinery for converting shapes into representations with sparse sets of parameters; these representations facilitate rendering, editing, and storage. When the source data is noisy or ambiguous, however, artists and engineers often manually construct such representations, a tedious and potentially time-consuming process. While advances in deep learning have been successfully applied to noisy geometric data, the task of generating parametric shapes has so far been difficult for these methods. In this thesis, we consider the task of deep parametric shape prediction from two distinct angles. First, we propose a new framework for predicting parametric shape primitives, using distance fields to transition between parameters, like control points, and input data on a raster grid. We demonstrate efficacy on 2D and 3D tasks, including font vectorization and surface abstraction. Second, we look at the problem of sketch-based modeling. Sketch-based modeling aims to model 3D geometry using a concise and easy-to-create but extremely ambiguous input: artist sketches. While most conventional sketch-based modeling systems target smooth shapes and place manually designed priors on the 3D shapes, we present a system to infer a complete man-made 3D shape, composed of parametric surfaces, from a single bitmap sketch. In particular, we introduce our parametric representation as well as several specially designed loss functions. We also propose a data generation and augmentation pipeline for sketches. We demonstrate the efficacy of our system on a gallery of synthetic and real sketches as well as via comparison to related work.
"Supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374, the Toyota-CSAIL Joint Research Center, and the Skoltech-MIT Next Generation Program"
by Dmitriy Smirnov.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
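The distance-field bridge between control points and raster data that the abstract describes can be illustrated compactly. The sketch below is my own toy construction, not the thesis's pipeline: it samples a quadratic Bezier curve from three control points and rasterises its unsigned distance field on a grid. A differentiable analogue of this map would let raster-space losses reach the shape parameters.

```python
import numpy as np

def bezier_points(ctrl, n=200):
    """Sample a quadratic Bezier curve defined by three control points."""
    t = np.linspace(0, 1, n)[:, None]
    p0, p1, p2 = ctrl
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def distance_field(ctrl, res=64):
    """Rasterise the unsigned distance from each grid cell to the curve."""
    pts = bezier_points(ctrl)
    ys, xs = np.mgrid[0:1:res * 1j, 0:1:res * 1j]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1).min(axis=1)
    return d.reshape(res, res)

field = distance_field(np.array([[0.1, 0.1], [0.5, 0.9], [0.9, 0.1]]))
print(field.shape, round(float(field.min()), 4))
```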
APA, Harvard, Vancouver, ISO, and other styles
12

Bratières, Sébastien. "Non-parametric Bayesian models for structured output prediction." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274973.

Full text
Abstract:
Structured output prediction is a machine learning task in which an input object is not just assigned a single class, as in classification, but multiple, interdependent labels. This means that the presence or value of a given label affects the other labels, for instance in text labelling problems, where output labels are applied to each word and their interdependencies must be modelled. Non-parametric Bayesian (NPB) techniques are probabilistic modelling techniques which have the interesting property of allowing model capacity to grow, in a controllable way, with data complexity, while maintaining the advantages of Bayesian modelling. In this thesis, we develop NPB algorithms to solve structured output problems. We first study a map-reduce implementation of a stochastic inference method designed for the infinite hidden Markov model, applied to a computational linguistics task, part-of-speech tagging. We show that mainstream map-reduce frameworks do not easily support highly iterative algorithms. The main contribution of this thesis is a conceptually novel discriminative model, GPstruct. It is motivated by labelling tasks, and combines attractive properties of conditional random fields (CRF), structured support vector machines, and Gaussian process (GP) classifiers. In probabilistic terms, GPstruct combines a CRF likelihood with a GP prior on factors; it can also be described as a Bayesian kernelized CRF. To train this model, we develop a Markov chain Monte Carlo algorithm based on elliptical slice sampling and investigate its properties. We then validate it on real data experiments, and explore two topologies: sequence output with text labelling tasks, and grid output with semantic segmentation of images. The latter case poses scalability issues, which are addressed using likelihood approximations and an ensemble method which allows distributed inference and prediction. The experimental validation demonstrates that: (a) the model is flexible and its constituent parts are modular and easy to engineer; (b) predictive performance and, most crucially, the probabilistic calibration of predictions are better than or equal to those of competitor models; and (c) model hyperparameters can be learnt from data.
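In symbols (my notation, reconstructed from the description above rather than quoted from the thesis), GPstruct places a GP prior on the factor values entering a CRF likelihood:

```latex
f \sim \mathcal{GP}\bigl(0,\, k(\cdot,\cdot)\bigr), \qquad
p(y \mid x, f) \;=\;
\frac{\exp\Bigl(\sum_{c} f(c, x_{c}, y_{c})\Bigr)}
     {\sum_{y'} \exp\Bigl(\sum_{c} f(c, x_{c}, y'_{c})\Bigr)},
```

where c ranges over the cliques of the output structure. Marginalising f with MCMC (elliptical slice sampling in the thesis) yields the Bayesian kernelized CRF described above.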
APA, Harvard, Vancouver, ISO, and other styles
13

Larson, Barbara Keelor. "Informal workplace learning and partner relationships among paramedics in the prehospital setting /." Access Digital Full Text version, 1991. http://pocketknowledge.tc.columbia.edu/home.php/bybib/10258784.

Full text
Abstract:
Thesis (Ed.D.) -- Teachers College, Columbia University, 1991.
Typescript; issued also on microfilm. Sponsor: Victoria Marsick. Dissertation Committee: William Yakowitz. Includes bibliographical references: (leaves 205-223).
APA, Harvard, Vancouver, ISO, and other styles
14

Angola, Enrique. "Novelty Detection Of Machinery Using A Non-Parametric Machine Learning Approach." ScholarWorks @ UVM, 2018. https://scholarworks.uvm.edu/graddis/923.

Full text
Abstract:
A novelty detection algorithm inspired by human audio pattern recognition is conceptualized and experimentally tested. This anomaly detection technique can be used to monitor the health of a machine on its own, or it can be coupled with a current state-of-the-art system to enhance its fault detection capabilities. Time-domain data obtained from a microphone are processed by applying a short-time FFT, which returns time-frequency patterns. Such patterns are fed to a machine learning algorithm designed to detect novel signals and identify the windows in the frequency domain where such novelties occur. The algorithm presented in this work uses one-dimensional kernel density estimation for different frequency bins, eliminating the need for data dimension reduction algorithms. The method of "pseudo-likelihood cross validation" is used to find an independent optimal kernel bandwidth for each frequency bin. Metrics such as the "Individual Node Relative Difference" and "Total Novelty Score" are presented in this work and used to assess the degree of novelty of a new signal. Experimental datasets containing synthetic and real novelties are used to illustrate and test the novelty detection algorithm, and novelties are successfully detected in all experiments. The presented technique could greatly enhance the performance of current state-of-the-art condition monitoring systems, or could be used as a stand-alone system.
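A minimal version of the per-bin density pipeline is easy to sketch. The fragment below is an illustration, not the thesis code: it computes a short-time FFT with SciPy, fits one 1-D kernel density estimate per frequency bin on baseline data, and scores new frames by summed negative log-density. For brevity it uses SciPy's default Scott's-rule bandwidth rather than the pseudo-likelihood cross-validated bandwidth the thesis selects per bin, and the signals are synthetic stand-ins.

```python
import numpy as np
from scipy.signal import stft
from scipy.stats import gaussian_kde

fs = 8_000
baseline = np.random.randn(fs * 5)            # healthy-machine audio (stand-in)
test = np.concatenate([np.random.randn(fs), 3 * np.random.randn(fs)])

# Short-time FFT -> magnitude patterns per frequency bin.
_, _, Zb = stft(baseline, fs=fs, nperseg=256)
_, _, Zt = stft(test, fs=fs, nperseg=256)
Mb, Mt = np.abs(Zb), np.abs(Zt)

# One 1-D KDE per frequency bin (Scott's rule here, not the thesis's
# pseudo-likelihood cross-validated bandwidth).
kdes = [gaussian_kde(Mb[i]) for i in range(Mb.shape[0])]

# Novelty score per test frame: summed negative log-density over bins.
eps = 1e-12
scores = np.array([
    -sum(np.log(k.evaluate(Mt[i, j])[0] + eps) for i, k in enumerate(kdes))
    for j in range(Mt.shape[1])
])
print(scores[:3], scores[-3:])                # later frames should score higher
```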
APA, Harvard, Vancouver, ISO, and other styles
15

Porter, Robert Mceuen. "Application of Machine Learning and Parametric NURBS Geometry to Mode Shape Identification." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/5744.

Full text
Abstract:
In any design, the dynamic characteristics of a part are dependent on its geometric and material properties. Identifying vibrational mode shapes within an iterative design process becomes difficult and time-consuming due to frequently changing part definition. Although research has been done to improve the process, visual inspection of analysis results is still the current means of identifying each vibrational mode determined by a modal analysis. This research investigates the automation of the mode shape identification process through the use of parametric geometry and machine learning. In the developed method, displacement results from finite element modal analysis are used to create parametric geometry, which allows the matching of mode shapes without regard to changing part geometry or mesh coarseness. By automating the mode shape identification process with parametric geometry and machine learning, the designer can gain a more complete view of the part's dynamic properties, with increased time savings over the current standard of visual inspection.
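For orientation, the standard scalar used to compare two mode-shape vectors is the Modal Assurance Criterion; the abstract does not name the thesis's matching metric, so this is offered only as the conventional reference point:

```latex
\mathrm{MAC}(\varphi_i, \varphi_j) \;=\;
\frac{\bigl|\varphi_i^{\mathsf{T}} \varphi_j\bigr|^{2}}
     {\bigl(\varphi_i^{\mathsf{T}} \varphi_i\bigr)\,
      \bigl(\varphi_j^{\mathsf{T}} \varphi_j\bigr)}
```

A MAC of 1 indicates identical shapes up to scale, and values near 0 indicate orthogonal shapes. Evaluating shapes on parametric NURBS geometry rather than raw mesh nodes is what makes such comparisons independent of mesh coarseness.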
APA, Harvard, Vancouver, ISO, and other styles
16

Bartcus, Marius. "Bayesian non-parametric parsimonious mixtures for model-based clustering." Thesis, Toulon, 2015. http://www.theses.fr/2015TOUL0010/document.

Full text
Abstract:
This thesis focuses on statistical learning and multi-dimensional data analysis, particularly on unsupervised learning of generative models for model-based clustering. We study Gaussian mixture models, in the context of maximum likelihood estimation via the EM algorithm, as well as in the Bayesian context of maximum a posteriori estimation via Markov Chain Monte Carlo (MCMC) sampling techniques. We mainly consider parsimonious mixture models, which are based on a spectral decomposition of the covariance matrix and provide a flexible framework, particularly for the analysis of high-dimensional data. Then, we investigate non-parametric Bayesian mixtures, which are based on general flexible processes such as the Dirichlet process and the Chinese Restaurant Process. This non-parametric model formulation is relevant both for learning the model and for dealing with the issue of model selection. We propose new Bayesian non-parametric parsimonious mixtures and derive an MCMC sampling technique in which the mixture model and the number of mixture components are simultaneously learned from the data. The selection of the model structure is performed using Bayes factors. These models, by their non-parametric and sparse formulation, are useful for the analysis of large data sets when the number of classes is undetermined and increases with the data, and when the dimension is high. The models are validated on simulated data and standard real data sets. Then, they are applied to a difficult real problem: the automatic structuring of complex bioacoustic data derived from whale song signals. Finally, we open Markovian perspectives via hierarchical Dirichlet process hidden Markov models.
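The Chinese Restaurant Process mentioned above is the simplest way to see how the number of components can be learned rather than fixed in advance. A minimal simulation, illustrative only:

```python
import numpy as np

def crp(n, alpha, rng=None):
    """Sample table assignments for n customers from CRP(alpha).

    A new 'table' (mixture component) opens with probability
    proportional to alpha; existing tables attract customers in
    proportion to their current size.
    """
    rng = np.random.default_rng(rng)
    counts = []                                  # customers per table
    seats = []
    for _ in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(counts):
            counts.append(1)                     # open a new component
        else:
            counts[k] += 1
        seats.append(k)
    return seats, counts

seats, counts = crp(1000, alpha=2.0, rng=0)
print(len(counts), "components for 1000 points")  # grows ~ alpha * log(n)
```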
APA, Harvard, Vancouver, ISO, and other styles
17

Mukora, Audrey Etheline. "Learning curves and engineering assessment of emerging energy technologies : onshore wind." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/8968.

Full text
Abstract:
Sustainable energy systems require the deployment of new technologies to help tackle the challenges of climate change and of securing energy supplies. Future sources of energy are less economically competitive than conventional technologies, but there is potential for cost reduction. Tools for modelling technological change are important for assessing the deployment potential of early-stage technologies. Learning curves are a tool for assessing and forecasting the cost reduction of a product achieved through experience from cumulative production. They are often used to assess technological improvements, but have a number of limitations for emerging energy technologies. Learning curves are aggregate in nature, representing the overall cost reduction gained from learning-by-doing; they do not identify the actual factors behind the cost reduction. Using the case study of onshore wind energy, this PhD study focuses on combining learning curves with engineering assessment methods for improved methods of assessing and managing technical change in emerging energy technologies. A third approach, parametric modelling, provides a potential means to integrate the two methods.
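The single-factor learning curve the abstract refers to has a standard textbook form, given here for orientation; the thesis's exact specification may differ:

```latex
C(x) = C_{0}\, x^{-b}, \qquad \mathrm{LR} = 1 - 2^{-b},
```

where C(x) is the unit cost after cumulative production x, C_0 the cost of the first unit, b the learning index, and LR the fractional cost reduction per doubling of cumulative production. The aggregation critique above is precisely that b lumps together every underlying driver of cost reduction.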
APA, Harvard, Vancouver, ISO, and other styles
18

Ion-Margineanu, Adrian. "Machine learning for classifying abnormal brain tissue progression based on multi-parametric Magnetic Resonance data." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1224/document.

Full text
Abstract:
Machine learning is a subdiscipline of artificial intelligence which focuses on algorithms capable of adapting their parameters based on a set of observed data, by optimizing an objective or cost function. Machine learning has been the subject of great interest in the biomedical community because it can improve the sensitivity and/or specificity of detection and diagnosis of any disease, while increasing the objectivity of the decision-making process. With the recent increase in the volume and complexity of medical data being collected, there is a clear need for applying machine learning algorithms in multi-parametric analysis for new detection and diagnostic modalities. Biomedical imaging is becoming indispensable for healthcare, as multiple modalities, such as Magnetic Resonance Imaging (MRI), Computed Tomography, and Positron Emission Tomography, are increasingly used in both research and clinical settings. The non-invasive standard for brain imaging is MRI, as it can provide structural and functional brain maps with high resolution, all within acceptable scanning times. However, with the increase in MRI data volume and complexity, it is becoming more time-consuming and difficult for clinicians to integrate all the data and make accurate decisions. The aim of this thesis is to develop machine learning methods for automated preprocessing and diagnosis of abnormal brain tissues, in particular for the follow-up of glioblastoma multiforme (GBM) and multiple sclerosis (MS). Current conventional MRI (cMRI) techniques are very useful in detecting the main features of brain tumours and MS lesions, such as size and location, but are insufficient for specifying the grade or evolution of the disease. Therefore, the acquisition of advanced MRI, such as perfusion weighted imaging (PWI), diffusion kurtosis imaging (DKI), and magnetic resonance spectroscopic imaging (MRSI), is necessary to provide complementary information such as blood flow, tissue organisation, and metabolism, induced by pathological changes. In the GBM experiments our aim is to discriminate and predict the evolution of patients treated with standard radiochemotherapy and immunotherapy based on conventional and advanced MRI data. In the MS experiments our aim is to discriminate between healthy subjects and MS patients, as well as between different MS forms, based only on clinical and MRSI data. In a first experiment in GBM follow-up, only advanced MRI parameters were explored on a relatively small subset of patients. Average PWI parameters computed on manually delineated regions of interest (ROI) were found to be excellent biomarkers, predicting GBM evolution one month earlier than the clinicians. In a second experiment in GBM follow-up, on a larger subset of patients, MRSI was replaced by cMRI, while PWI and DKI parameter quantification was automated. Feature extraction was done on semi-manual tumour delineations, thereby reducing the time the clinician spends manually delineating the contrast-enhancing (CE) ROI. Learning a modified boosting algorithm on features extracted from semi-manual ROIs was shown to provide very high accuracy for GBM diagnosis. In a third experiment in GBM follow-up, on an extended subset of patients, a modified version of parametric response maps (PRM) was proposed to take into account the most likely infiltration area of the tumour, reducing even further the time a clinician would have to spend manually delineating the tumour, because all subsequent MRI scans were registered to the first one.
Two ways of computing PRM were compared, one based on cMRI and one based on PWI, as features extracted with these two modalities were the best at discriminating GBM evolution, according to results from the previous two experiments. Results obtained within this last GBM analysis showed that using PRM based on cMRI is clearly superior to using PRM based on PWI [etc…]
APA, Harvard, Vancouver, ISO, and other styles
19

Mahler, Nicolas. "Machine learning methods for discrete multi-scale flows: application to finance." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00749717.

Full text
Abstract:
This research work studies the problem of identifying and predicting the trends of a single financial target variable in a multivariate setting. The machine learning point of view on this problem is presented in chapter I. The efficient market hypothesis, which stands in contradiction with the objective of trend prediction, is first recalled. The different schools of thought in market analysis, which disagree to some extent with the efficient market hypothesis, are reviewed as well. The tenets of the fundamental analysis, the technical analysis and the quantitative analysis are made explicit. We particularly focus on the use of machine learning techniques for computing predictions on time-series. The challenges of dealing with dependent and/or non-stationary features while avoiding the usual traps of overfitting and data snooping are emphasized. Extensions of the classical statistical learning framework, particularly transfer learning, are presented. The main contribution of this chapter is the introduction of a research methodology for developing trend predictive numerical models. It is based on an experimentation protocol, which is made of four interdependent modules. The first module, entitled Data Observation and Modeling Choices, is a preliminary module devoted to the statement of very general modeling choices, hypotheses and objectives. The second module, Database Construction, turns the target and explanatory variables into features and labels in order to train trend predictive numerical models. The purpose of the third module, entitled Model Construction, is the construction of trend predictive numerical models. The fourth and last module, entitled Backtesting and Numerical Results, evaluates the accuracy of the trend predictive numerical models over a "significant" test set via two generic backtesting plans. The first plan computes recognition rates of upward and downward trends. The second plan designs trading rules using predictions made over the test set. Each trading rule yields a profit and loss account (P&L), which is the cumulated earned money over time. These backtesting plans are additionally completed by interpretation functionalities, which help to analyze the decision mechanism of the numerical models. These functionalities can be measures of feature prediction ability and measures of model and prediction reliability. They decisively contribute to formulating better data hypotheses and enhancing the time-series representation, database and model construction procedures. This is made explicit in chapter IV. Numerical models, aiming at predicting the trends of the target variables introduced in chapter II, are indeed computed for the model construction methods described in chapter III and thoroughly backtested. The switch from one model construction approach to another is particularly motivated. The dramatic influence of the choice of parameters - at each step of the experimentation protocol - on the formulation of conclusion statements is also highlighted. The RNN procedure, which does not require any parameter tuning, has thus been used to reliably study the efficient market hypothesis. New research directions for designing trend predictive models are finally discussed.
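The two generic backtesting plans described above reduce to a few lines of arithmetic. The sketch below is a hypothetical illustration, with names and toy data of my own: a recognition rate of upward/downward trends (first plan), and a P&L from a rule that trades the sign of the prediction (second plan), ignoring transaction costs.

```python
import numpy as np

def hit_rate(predicted_trend, returns):
    """Recognition rate of up/down moves (first backtesting plan)."""
    return np.mean(np.sign(predicted_trend) == np.sign(returns))

def backtest_pnl(predicted_trend, returns):
    """Cumulative P&L of a rule that goes long on predicted upward
    trends and short on predicted downward trends (second plan)."""
    positions = np.sign(predicted_trend)        # +1 long, -1 short, 0 flat
    return np.cumsum(positions * returns)

rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, size=250)                   # daily returns (simulated)
pred = np.sign(r + rng.normal(0, 0.02, size=250))   # noisy trend predictions
print(hit_rate(pred, r), backtest_pnl(pred, r)[-1])
```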
APA, Harvard, Vancouver, ISO, and other styles
20

Li, Chao. "Characterising heterogeneity of glioblastoma using multi-parametric magnetic resonance imaging." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/287475.

Full text
Abstract:
A better understanding of tumour heterogeneity is central for accurate diagnosis, targeted therapy and personalised treatment of glioblastoma patients. This thesis aims to investigate whether pre-operative multi-parametric magnetic resonance imaging (MRI) can provide a useful tool for evaluating inter-tumoural and intra-tumoural heterogeneity of glioblastoma. For this purpose, we explored: 1) the utility of habitat imaging in combining multi-parametric MRI for identifying invasive sub-regions (I & II); 2) the significance of integrating multi-parametric MRI and extracting modality inter-dependence for patient stratification (III & IV); and 3) the value of advanced physiological MRI and a radiomics approach in predicting epigenetic phenotypes (V). The following observations were made.
I. Using a joint histogram analysis method, habitats with different diffusivity patterns were identified. A non-enhancing sub-region with decreased isotropic diffusion and increased anisotropic diffusion was associated with progression-free survival (PFS, hazard ratio [HR] = 1.08, P < 0.001) and overall survival (OS, HR = 1.36, P < 0.001) in multivariate models.
II. Using a thresholding method, two low-perfusion compartments were identified, which displayed a hypoxic and pro-inflammatory microenvironment. Higher lactate in the low-perfusion compartment with restricted diffusion was associated with worse survival (PFS: HR = 2.995, P = 0.047; OS: HR = 4.974, P = 0.005).
III. Using an unsupervised multi-view feature selection and late integration method, two patient subgroups were identified, which demonstrated distinct OS (P = 0.007) and PFS (P < 0.001). Features selected by this approach showed significantly incremental prognostic value for 12-month OS (P = 0.049) and PFS (P = 0.022) over clinical factors.
IV. Using a method of unsupervised clustering via copula transform and discrete feature extraction, three patient subgroups were identified. The subtype demonstrating high inter-dependency of diffusion and perfusion displayed higher lactate than the other two subtypes (P = 0.016 and P = 0.044, respectively). Both the low and high inter-dependency subtypes showed worse PFS compared to the intermediate subtype (P = 0.046 and P = 0.009, respectively).
V. Using a radiomics approach, advanced physiological images showed better performance than structural images for predicting O6-methylguanine-DNA methyltransferase (MGMT) methylation status. For predicting 12-month PFS, the model of radiomic features and clinical factors outperformed the model of MGMT methylation and clinical factors (P = 0.010).
In summary, pre-operative multi-parametric MRI shows potential for the non-invasive evaluation of glioblastoma heterogeneity, which could provide crucial information for patient care.
APA, Harvard, Vancouver, ISO, and other styles
21

Wei, Wei. "Probabilistic Models of Topics and Social Events." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/941.

Full text
Abstract:
Structured probabilistic inference has been shown to be useful in modeling complex latent structures of data. One successful way in which this technique has been applied is in the discovery of latent topical structures of text data, usually referred to as topic modeling. With the recent popularity of mobile devices and social networking, we can now easily acquire text data attached to meta information, such as geo-spatial coordinates and time stamps. This metadata can provide rich and accurate information that is helpful in answering many research questions related to spatial and temporal reasoning. However, such data must be treated differently from text data. For example, spatial data is usually organized in terms of a two-dimensional region, while temporal information can exhibit periodicities. While some work exists in the topic modeling community that utilizes parts of this meta information, these models have largely focused on incorporating metadata into text analysis, rather than providing models that make full use of the joint distribution of meta information and text. In this thesis, I propose the event detection problem, which is a multidimensional latent clustering problem on spatial, temporal and topical data. I start with a simple parametric model to discover independent events using geo-tagged Twitter data. The model is then improved in two directions. First, I augment the model using the Recurrent Chinese Restaurant Process (RCRP) to discover events that are dynamic in nature. Second, I study a model that can detect events using data from multiple media sources, examining the characteristics of different media in terms of reported event times and linguistic patterns. The approaches studied in this thesis are largely based on Bayesian nonparametric methods in order to deal with streaming data and an unpredictable number of clusters. The research will not only serve the event detection problem itself but also shed light on the more general structured clustering problem in spatial, temporal and textual data.
APA, Harvard, Vancouver, ISO, and other styles
22

Gurney, Rebecca L. "Stimulus Generalization to Different Levels of Illumination in Paramecium caudatum." Connect to full text in OhioLINK ETD Center, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1228768700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Eamrurksiri, Araya. "Applying Machine Learning to LTE/5G Performance Trend Analysis." Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139126.

Full text
Abstract:
The core idea of this thesis is to reduce the workload of manual inspection when performance analysis of updated software is required. The Central Processing Unit (CPU) utilization, one of the essential factors for evaluating performance, is analyzed. The purpose of this work is to apply machine learning techniques that are suitable for detecting the state of the CPU utilization and any changes in the test environment that affect the CPU utilization. The detection relies on a Markov switching model to identify structural changes, which are assumed to follow an unobserved Markov chain, in the time series data. The historical behavior of the data is described by a first-order autoregression, making the Markov switching model a Markov switching autoregressive model. Another approach, based on a non-parametric analysis, is also proposed: a distribution-free method that requires fewer assumptions, called the E-divisive method, which uses a hierarchical clustering algorithm to detect multiple change point locations in the time series data. As the data used in this analysis do not contain any ground truth, the methods are evaluated on simulated datasets with known states. These simulated datasets are also used for studying and comparing the Markov switching autoregressive model and the E-divisive method. Results show that the former method is preferable because of its better performance in detecting changes. Some information about the state of the CPU utilization is also obtained from the Markov switching model. The E-divisive method proves to have less power in detecting changes and a higher rate of missed detections. The results from applying the Markov switching autoregressive model to the real data are presented with interpretations and discussions.
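The Markov switching autoregressive model is available off the shelf. The fragment below is an illustrative reconstruction on simulated CPU-utilization-like data with one regime change, using statsmodels; it is not the thesis's code or data, and the constants are my assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Simulated CPU utilisation with a regime change at t = 300.
low = 30 + np.cumsum(rng.normal(0, 0.3, 300)) * 0.1
high = 55 + np.cumsum(rng.normal(0, 0.3, 300)) * 0.1
y = np.concatenate([low, high])

# First-order Markov switching autoregression with switching
# intercept and variance, as in a two-state change detection setup.
model = sm.tsa.MarkovAutoregression(
    y, k_regimes=2, order=1, switching_ar=False, switching_variance=True
)
res = model.fit()
# Probability of being in each regime at each time point.
print(res.smoothed_marginal_probabilities[:5])
```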
APA, Harvard, Vancouver, ISO, and other styles
24

Elliott, Jason Lynn. "AquaMOOSE 3D: a Constructionist Approach to Math Learning Motivated by Artistic Expression." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7469.

Full text
Abstract:
Research has shown that students' interest in academics declines significantly with age, especially in the areas of math and science (Kahle et al., 1993; Wigfield, 1994; Wigfield and Eccles, 1992). One approach to combating this problem is to use new technologies to engage students who otherwise would not be interested in learning. In the AquaMOOSE project, 3D graphical technology is combined with a constructionist learning philosophy to create an environment where students can creatively explore new mathematical concepts. The AquaMOOSE socio-technical system has been developed using an iterative design process. Three formal studies were conducted to assess the effectiveness of the system, as well as several smaller-scale evaluations. The first study was conducted during a six-week summer program where students were able to use the AquaMOOSE system during their free time. The second study explored different learning issues in the context of a comparison-class study at a local high school, where one section learned about polar coordinates using standard curriculum materials and an equivalent section learned the same material using a curriculum designed specifically around the AquaMOOSE system. The final study of the AquaMOOSE system was an eight-week after-school program at a local high school where a balance between structure and creative freedom was explored. In this thesis, the iterative design and evaluation of the AquaMOOSE socio-technical system are presented. Evidence from this process is used to suggest implications of using 3D technology and constructionist philosophy for teaching complex mathematical content. The findings presented address issues of using constructionist learning environments for complex content and the tradeoffs of using 3D technology for educational systems.
APA, Harvard, Vancouver, ISO, and other styles
25

GONÇALVES, JÚNIOR Paulo Mauricio. "Multivariate non-parametric statistical tests to reuse classifiers in recurring concept drifting environments." Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/12226.

Full text
Abstract:
Data streams are a recent processing model where data arrive continuously, in large quantities, at high speeds, so that they must be processed on-line. Besides that, several private and public institutions store large amounts of data that also must be processed. Traditional batch classifiers are not well suited to handle huge amounts of data, for basically two reasons. First, they usually read the available data several times until convergence, which is impractical in this scenario. Second, they assume that the context represented by the data is stable in time, which may not be true. In fact, context change is a common situation in data streams, and is named concept drift. This thesis presents rcd, a framework that offers an alternative approach to handling data streams that suffer from recurring concept drifts. It creates a new classifier for each context found and stores a sample of the data used to build it. When a new concept drift occurs, rcd compares the new context to old ones using a non-parametric multivariate statistical test to verify whether both contexts come from the same distribution. If so, the corresponding classifier is reused. If not, a new classifier is generated and stored. Three kinds of tests were performed. One compares the rcd framework with several adaptive algorithms (among single and ensemble approaches) on artificial and real data sets, among the most used in the concept drift research area, with abrupt and gradual concept drifts. The ability of the classifiers to represent each context, how they handle concept drift, and the training and testing times needed to evaluate the data sets were observed. Results indicate that rcd had similar or better statistical results compared to the other classifiers. On the real-world data sets, rcd presented accuracies close to the best classifier in each data set. Another test compares two statistical tests (knn and Cramer) in their capability of representing and identifying contexts. Tests were performed using adaptive and batch classifiers as base learners of rcd, on artificial and real-world data sets, with several rates-of-change. Results indicate that, on average, knn had better results than the Cramer test, and was also faster. Independently of the test used, rcd had higher accuracy values compared to its respective base learners. An improvement to the rcd framework is also presented, in which the statistical tests are performed in parallel through the use of a thread pool. Tests were performed on three processors with different numbers of cores. Better results were obtained when there was a high number of detected concept drifts, the buffer size used to represent each data distribution was large, and there was a high test frequency. Even when none of these conditions apply, parallel and sequential execution still have very similar performances. Finally, a comparison between six different drift detection methods was also performed, comparing predictive accuracies, evaluation times, and drift handling, including false alarm and miss detection rates, as well as the average distance to the drift point and its standard deviation.
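The decision at the heart of rcd, whether two contexts come from the same distribution, can be sketched with a Schilling-style nearest-neighbour two-sample statistic and a permutation p-value. This is an illustration of the knn-test idea named above under my own assumptions, not the framework's implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_stat(X, Y, k=5):
    """Fraction of k nearest neighbours that stay within the same sample.
    Near 0.5 when X and Y come from the same distribution; higher otherwise."""
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Z)
    _, idx = nn.kneighbors(Z)                  # idx[:, 0] is the point itself
    return np.mean(labels[idx[:, 1:]] == labels[:, None])

def same_distribution(X, Y, k=5, n_perm=200, alpha=0.05, seed=0):
    """Permutation test; True suggests the stored classifier can be reused."""
    rng = np.random.default_rng(seed)
    obs = knn_stat(X, Y, k)
    Z = np.vstack([X, Y])
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(Z))
        null.append(knn_stat(Z[perm[:len(X)]], Z[perm[len(X):]], k))
    p = np.mean(np.array(null) >= obs)         # large statistic => different
    return p >= alpha

X = np.random.randn(150, 3)
print(same_distribution(X, np.random.randn(150, 3)))        # likely True
print(same_distribution(X, np.random.randn(150, 3) + 1.0))  # likely False
```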
APA, Harvard, Vancouver, ISO, and other styles
26

Gonçalves Júnior, Paulo Mauricio. "Multivariate non-parametric statistical tests to reuse classifiers in recurring concept drifting environments." Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/12288.

Full text
Abstract:
Data streams are a recent processing model in which data arrive continuously, in large quantities and at high speed, so they must be processed on-line. In addition, several private and public institutions store large amounts of data that must also be processed. Traditional batch classifiers are not well suited to handling huge amounts of data for basically two reasons. First, they usually read the available data several times until convergence, which is impractical in this scenario. Second, they assume that the context represented by the data is stable in time, which may not be true. In fact, context change is a common situation in data streams, and is named concept drift. This thesis presents RCD, a framework that offers an alternative approach to handling data streams that suffer from recurring concept drifts. It creates a new classifier for each context found and stores a sample of the data used to build it. When a new concept drift occurs, RCD compares the new context to old ones using a non-parametric multivariate statistical test to verify whether both contexts come from the same distribution. If so, the corresponding classifier is reused. If not, a new classifier is generated and stored. Three kinds of tests were performed. One compares the RCD framework with several adaptive algorithms (among single and ensemble approaches) on artificial and real data sets, among the most used in the concept drift research area, with abrupt and gradual concept drifts. The ability of the classifiers to represent each context, how they handle concept drift, and the training and testing times needed to evaluate the data sets are observed. Results indicate that RCD had similar or better statistical results compared to the other classifiers. On the real-world data sets, RCD presented accuracies close to the best classifier in each data set. Another test compares two statistical tests (kNN and Cramér) in their capability to represent and identify contexts. Tests were performed using adaptive and batch classifiers as base learners of RCD, on artificial and real-world data sets, with several rates of change. Results indicate that, on average, kNN had better results than the Cramér test, and was also faster. Independently of the test used, RCD had higher accuracy values than its respective base learners. An improvement to the RCD framework is also presented, in which the statistical tests are performed in parallel through the use of a thread pool. Tests were performed on three processors with different numbers of cores. Better results were obtained when there was a high number of detected concept drifts, the buffer size used to represent each data distribution was large, and the test frequency was high. Even if none of these conditions apply, parallel and sequential execution still have very similar performance. Finally, a comparison between six different drift detection methods was also performed, comparing predictive accuracies, evaluation times, and drift handling, including false alarm and miss detection rates, as well as the average distance to the drift point and its standard deviation.
APA, Harvard, Vancouver, ISO, and other styles
27

Aghazadeh, Omid. "Data Driven Visual Recognition." Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-145865.

Full text
Abstract:
This thesis is mostly about supervised visual recognition problems. Based on a general definition of categories, the contents are divided into two parts: one that models categories and one that is not category based. We are interested in data-driven solutions for both kinds of problems. In the category-free part, we study novelty detection in temporal and spatial domains as a category-free recognition problem. Using data-driven models, we demonstrate that, based on a few reference exemplars, our methods are able to detect novelties in the ego-motions of people and changes in the static environments surrounding them. In the category-level part, we study object recognition. We consider both object category classification and localization, and propose scalable data-driven approaches for both problems. A mixture of parametric classifiers, initialized with a sophisticated clustering of the training data, is demonstrated to adapt to the data better than various baselines, such as the same model initialized with less subtly designed procedures. A nonparametric large-margin classifier is introduced and demonstrated to have a multitude of advantages over its competitors: better training and testing time costs, the ability to make use of indefinite/invariant and deformable similarity measures, and adaptive complexity are the main features of the proposed model. We also propose a rather realistic model of recognition problems, which quantifies the interplay between representations, classifiers, and recognition performance. Based on data-describing measures, which are aggregates of pairwise similarities of the training data, our model characterizes and describes the distributions of training exemplars. The measures are shown to capture many aspects of the difficulty of categorization problems and to correlate significantly with observed recognition performance. Utilizing these measures, the model predicts the performance of particular classifiers on distributions similar to the training data. These predictions, when compared to the test performance of the classifiers on the test sets, are reasonably accurate. We discuss various aspects of visual recognition problems: what is the interplay between representations and classification tasks, how can different models better adapt to the training data, and so on. We describe and analyze the aforementioned methods, which are designed to tackle different visual recognition problems but share one common characteristic: being data driven.
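A stripped-down sketch of the clustering-initialized mixture idea, with k-means and per-cluster logistic regression as stand-ins for the thesis's more sophisticated components.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fit_clustered_mixture(X, y, n_components=4):
    """Cluster the training data, then fit one linear classifier per cluster."""
    km = KMeans(n_clusters=n_components, n_init=10, random_state=0).fit(X)
    experts = []
    for k in range(n_components):
        mask = km.labels_ == k
        if np.unique(y[mask]).size < 2:      # degenerate cluster: no expert
            experts.append(None)
        else:
            experts.append(LogisticRegression(max_iter=1000).fit(X[mask], y[mask]))
    return km, experts

def predict_mixture(km, experts, X, default=0):
    """Route each sample to the expert of its nearest cluster."""
    out = np.full(len(X), default)
    for i, k in enumerate(km.predict(X)):
        if experts[k] is not None:
            out[i] = experts[k].predict(X[i].reshape(1, -1))[0]
    return out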

APA, Harvard, Vancouver, ISO, and other styles
28

van der Wilk, Mark. "Sparse Gaussian process approximations and applications." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/288347.

Full text
Abstract:
Many tasks in machine learning require learning some kind of input-output relation (function), for example, recognising handwritten digits (from image to number) or learning the motion behaviour of a dynamical system like a pendulum (from positions and velocities now to future positions and velocities). We consider this problem using the Bayesian framework, where we use probability distributions to represent the state of uncertainty that a learning agent is in. In particular, we will investigate methods which use Gaussian processes to represent distributions over functions. Gaussian process models require approximations in order to be practically useful. This thesis focuses on understanding existing approximations and investigating new ones tailored to specific applications. We advance the understanding of existing techniques first through a thorough review. We propose desiderata for non-parametric basis function model approximations, which we use to assess the existing approximations. Following this, we perform an in-depth empirical investigation of two popular approximations (VFE and FITC). Based on the insights gained, we propose a new inter-domain Gaussian process approximation, which can be used to increase the sparsity of the approximation, in comparison to regular inducing point approximations. This allows GP models to be stored and communicated more compactly. Next, we show that inter-domain approximations can also allow the use of models which would otherwise be impractical, as opposed to improving existing approximations. We introduce an inter-domain approximation for the Convolutional Gaussian process - a model that makes Gaussian processes suitable to image inputs, and which has strong relations to convolutional neural networks. This same technique is valuable for approximating Gaussian processes with more general invariance properties. Finally, we revisit the derivation of the Gaussian process State Space Model, and discuss some subtleties relating to their approximation. We hope that this thesis illustrates some benefits of non-parametric models and their approximation in a non-parametric fashion, and that it provides models and approximations that prove to be useful for the development of more complex and performant models in the future.
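To illustrate why inducing points make GP models cheaper to store and compute with, here is a minimal NumPy sketch of the subset-of-regressors predictive mean. It is an illustrative sparse approximation only; the thesis analyses the related VFE and FITC schemes in depth.

import numpy as np

def rbf(A, B, ell=1.0, sf2=1.0):
    """Squared-exponential kernel matrix between row-stacked inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def sor_predictive_mean(X, y, Z, Xstar, noise=0.1):
    """Subset-of-regressors predictive mean with M inducing inputs Z.
    Only an M x M system is solved, so the cost is O(N M^2) rather than
    the O(N^3) of exact GP regression."""
    Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))   # jitter for stability
    Kuf = rbf(Z, X)
    Kus = rbf(Z, Xstar)
    A = noise ** 2 * Kuu + Kuf @ Kuf.T
    return Kus.T @ np.linalg.solve(A, Kuf @ y)

# Z can simply be a random subset of the training inputs, e.g.:
# Z = X[np.random.choice(len(X), size=20, replace=False)]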
APA, Harvard, Vancouver, ISO, and other styles
29

Ameyaw, Daniel Adofo [Author], and Dirk [Academic supervisor] Söffker. "New parametric evaluation and fusion strategy for vibration diagnosis systems and classification approaches applied to machine learning and computer vision systems / Daniel Adofo Ameyaw ; Supervisor: Dirk Söffker." Duisburg, 2020. http://d-nb.info/1218465220/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Al-Jokhadar, Amer. "Towards a socio-spatial parametric grammar for sustainable tall residential buildings in hot-arid regions : learning from the vernacular model of the Middle East and North Africa." Thesis, Cardiff University, 2018. http://orca.cf.ac.uk/111874/.

Full text
Abstract:
In the Middle East and North Africa (MENA) region, high-rise buildings can be considered a hallmark of the contemporary cityscape and a response to continuing urbanisation. They offer many benefits, such as preserving natural and green spaces in the city and increasing access to views, light, and air at height. However, such buildings can also affect the social life of residents, and the social dimension of recent developments has received considerably less attention than the economic and environmental dimensions. This research aims to develop a method for addressing the social aspect in the design of high-rise residential buildings, one that could enhance social life between neighbours and improve well-being qualities such as privacy and security. Computation, as a tool for manipulating ideas, managing design parameters, and solving problems, is adopted to create synergies amongst a community's cultural, social, and environmental aspects. Currently, the main focus of computational models is limited primarily to building performance, optimisation, and functional requirements. Yet qualitative factors, such as social, cultural, and contextual aspects, are equally important. The study aims to offer architects a computational tool that guides the emergence of sustainable solutions for high-rise residential buildings and brings the building into harmony with the context and the preferences of users. In a social survey conducted by the researcher in the study area, with questionnaires distributed to families from 17 countries, the 173 responses showed lower levels of social interaction between neighbours in contemporary buildings owing to the lack of gathering areas. Moreover, the excessive use of glazed facades and the sudden transition from public to private zones compromised family privacy and the specifics of the cultural context. On the other hand, the survey exposed potentials and impacts of vernacular houses and neighbourhoods that could affect social interaction between families and their privacy. Yet the vernacular model might not be compatible with the requirements of modern construction employing the latest technologies and materials. The study therefore adopted a critical-regionalism approach that balances tradition against the importance of progress and development. A systematic model of analysis, combining 'spatial reasoning' and 'space syntax' methods, was proposed to uncover the morphology of vernacular houses and neighbourhoods and to explore spatial topologies with social or experiential significance. The model added new aspects, such as hierarchy of spaces, orientation, type of enclosure, shared surfaces, and geometric properties of spaces, to the justified graph of Hillier and Hanson as a representation of formal and social realities. A total of 13 social indicators, with different units of representation such as numbers, diagrams, and textual descriptions, were identified and used to define spatial parameters, rules, and constraints. Results extracted from the analysis of historical cases showed that courtyards, public spaces, and the hierarchy of spaces are major features with the potential to balance social interaction and privacy. These results were combined with principles of shape grammars and transformed into spatial rules associated with parameters and descriptions. 
Grammars addressing the design of vernacular houses and neighbourhoods were combined with the requirements of high-rise buildings and used to construct a parametric computational tool for the design of vertical residential developments. The developed tool supports the recognition of the design brief for high-rise residential buildings, with the possibility of changing geometric and spatial parameters. Moreover, it offers an alternative method for implementing strategies of social sustainability and maximising the connection with the context, culture, and people. The tool was used by the researcher, as well as by professionals and architecture students in an experimental study, to generate different solutions for high-rise residential buildings. The analysis of the new alternatives showed that most cases successfully achieved the principles of social sustainability. Moreover, a usability evaluation assessing the efficiency of the tool in the early stage of design was conducted by distributing a questionnaire to the same participants. Results of the evaluation showed that the developed interface offers designers a tool to investigate a class of satisfactory, unexpected design alternatives rather than a single best solution. It gives the user a flexible way to capture the relationship between public and private zones and to insert a series of public courtyards distributed over the different levels of the building, with the possibility of generating a private courtyard inside each apartment. Such a process, managed by a set of predefined social and spatial rules, confirms the design process as a balance between creativity and rationality. Moreover, it marks a transition from standard mass buildings to contemporary-vernacular projects that respect the cultural context, climate, and people.
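For readers unfamiliar with space syntax, the following sketch computes the classic mean-depth and relative-asymmetry (integration) measures on a justified access graph using networkx. The graph and room names are invented for illustration and do not come from the thesis.

import networkx as nx

# Hypothetical access graph of a courtyard house: nodes are spaces,
# edges are direct connections (doors or thresholds).
G = nx.Graph([("entry", "hall"), ("hall", "courtyard"),
              ("courtyard", "majlis"), ("courtyard", "kitchen"),
              ("hall", "stairs"), ("stairs", "family_room")])

def mean_depth(G, root):
    depths = nx.shortest_path_length(G, root)   # steps from root to all spaces
    return sum(depths.values()) / (G.number_of_nodes() - 1)

def relative_asymmetry(G, root):
    # Classic space-syntax integration measure: RA = 2(MD - 1) / (n - 2);
    # lower values mark more integrated (less private) spaces.
    n = G.number_of_nodes()
    return 2 * (mean_depth(G, root) - 1) / (n - 2)

for space in G:
    print(space, round(relative_asymmetry(G, space), 3))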
APA, Harvard, Vancouver, ISO, and other styles
31

Ibrahim, Ayman Wagdy Mohamed. "Predicting glare in open-plan offices using simplified data acquisitions and machine learning algorithms." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/204266/1/Ayman%20Wagdy%20Mohamed_Ibrahim_Thesis.pdf.

Full text
Abstract:
Glare in open-plan offices can negatively affect the productivity and well-being of office workers. Accurate glare prediction is challenging, as occupants' sensitivity to glare may differ under the same conditions. Developed as part of an ARC Linkage Project, this thesis challenges the limitations prevalent in current glare metrics by delivering a new model for predicting glare in open-plan offices. By utilising machine learning (ML) techniques, it unlocks more accurate tools and methods to assist architects and lighting engineers in the early stages of the design process, ultimately enabling more efficient daylit office designs with reduced glare discomfort in Australia.
APA, Harvard, Vancouver, ISO, and other styles
32

Le, Mounier Audrey. "Méta-optimisation pour la calibration automatique de modèles énergétiques bâtiment pour le pilotage anticipatif." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT038/document.

Full text
Abstract:
In order to tackle current climate issues, the building sector is encouraged to reduce its energy consumption while preserving occupant comfort. In this context, the ANR PRECCISION project aims to develop tools and methods for optimised energy management of buildings, which requires dynamic thermal models. This PhD work, carried out between the G2Elab and G-SCOP laboratories, focused on the parametric estimation of such models. Indeed, uncertainties due to poorly understood phenomena and the nature of the models make parameter calibration delicate. This complex procedure cannot yet be systematised: auto-regressive models have a low extrapolation capacity because their structure is inadequate, whereas physical models are non-linear in many parameters, so estimation leads to local optima that depend strongly on the initialisation. To overcome this obstacle, several approaches were explored using adapted physical models, for which identifiability studies were carried out on an experimental platform: PREDIS MHI. Different optimisation strategies are then proposed to determine which parameters can be calibrated. The first approach relies on an a priori analysis of parametric dispersion; the second relies on a meta-optimisation procedure that dynamically determines, over the course of a sequence of optimisations, the parameters to calibrate. The results are analysed and compared with various approaches (universal models, "naïve" identification of all parameters of a physical model, genetic algorithms, etc.) across different application cases.
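As a simplified illustration of calibrating a building thermal model, the sketch below fits a first-order R-C zone model by least squares. The model form, parameter names, and bounds are assumptions made for the example, not the PRECCISION models.

import numpy as np
from scipy.optimize import least_squares

def simulate_rc(theta, T_out, P_heat, T0, dt=3600.0):
    """Forward-Euler simulation of a first-order R-C zone model:
    C dT/dt = (T_out - T)/R + P_heat."""
    R, C = theta
    T = np.empty(len(T_out))
    T[0] = T0
    for k in range(len(T_out) - 1):
        T[k + 1] = T[k] + dt * ((T_out[k] - T[k]) / R + P_heat[k]) / C
    return T

def calibrate(T_meas, T_out, P_heat, theta0=(5e-3, 1e7)):
    """Least-squares fit of (R, C) to measured indoor temperatures."""
    resid = lambda th: simulate_rc(th, T_out, P_heat, T_meas[0]) - T_meas
    return least_squares(resid, theta0,
                         bounds=([1e-4, 1e5], [1.0, 1e9])).x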
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Qian. "Deep spiking neural networks." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.

Full text
Abstract:
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although a number of neuromorphic platforms are available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutions for cognitive tasks. Thus the main research problem that emerges is understanding how to operate and train biologically-plausible SNNs to close the gap in cognitive capabilities between SNNs and ANNs. SNNs can be trained by first training an equivalent ANN and then transferring the tuned weights to the SNN. This method is called 'off-line' training, since it does not take place on the SNN directly, but rather on an ANN. However, previous work on such off-line training methods has struggled with poor modelling accuracy of the spiking neurons and high computational complexity. In this thesis we propose a simple and novel activation function, Noisy Softplus (NSP), to closely model the response firing activity of biologically-plausible spiking neurons, and introduce a generalised off-line training method using the Parametric Activation Function (PAF) to map the abstract numerical values of the ANN to concrete physical units, such as current and firing rate, in the SNN. Based on this generalised training method and its fine-tuning, we achieve state-of-the-art accuracy on the MNIST classification task using spiking neurons, 99.07%, on a deep spiking convolutional neural network (ConvNet). We then take a step forward to 'on-line' training methods, where Deep Learning modules are trained purely on SNNs in an event-driven manner. Existing work has failed to provide SNNs with recognition accuracy equivalent to ANNs due to the lack of mathematical analysis. Thus we propose a formalised Spike-based Rate Multiplication (SRM) method, which transforms the product of firing rates into the number of coincident spikes of a pair of rate-coded spike trains. Moreover, these coincident spikes can be captured by the Spike-Time-Dependent Plasticity (STDP) rule to update the weights between the neurons in an on-line, event-based, and biologically-plausible manner. Furthermore, we put forward solutions to reduce correlations between spike trains, thereby addressing the performance drop observed in on-line SNN training. The promising results of spiking Autoencoders (AEs) and Restricted Boltzmann Machines (SRBMs) exhibit equivalent, sometimes even superior, classification and reconstruction capabilities compared to their non-spiking counterparts. To provide meaningful comparisons between the proposed SNN models and other existing methods within this rapidly advancing field of NE, we propose a large dataset of spike-based visual stimuli and a corresponding evaluation methodology to estimate the overall performance of SNN models and their hardware implementations.
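The exact parametrisation of Noisy Softplus is given in the thesis; as a rough illustration of the off-line idea, matching an analogue activation to a spiking neuron's rate response, the sketch below compares a plain softplus (a stand-in, not NSP itself) with the empirical firing rate of a noisy leaky integrate-and-fire neuron. All constants are illustrative.

import numpy as np

def lif_rate(I, sigma=0.5, T=2.0, dt=1e-4, tau=0.02, v_th=1.0):
    """Empirical firing rate (Hz) of a leaky integrate-and-fire neuron
    driven by constant current I plus white noise of scale sigma."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (-v / tau + I) + sigma * np.sqrt(dt) * np.random.randn()
        if v >= v_th:
            v, spikes = 0.0, spikes + 1
    return spikes / T

def softplus(x, k=1.0):
    """Stand-in analogue activation; NSP additionally takes a noise term."""
    return np.log1p(np.exp(k * np.clip(x, -30, 30))) / k

# Off-line recipe: fit the activation to the (I, rate) curve of the spiking
# neuron, train the ANN with it, then copy the tuned weights into the SNN.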
APA, Harvard, Vancouver, ISO, and other styles
34

Shandilya, Sharad. "Assessment and Prediction of Cardiovascular Status During Cardiac Arrest Through Machine Learning and Dynamical Time-Series Analysis." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3198.

Full text
Abstract:
In this work, new methods of feature extraction, feature selection, stochastic data characterization/modeling, variance reduction, and measures for parametric discrimination are proposed. These methods have implications for data mining, machine learning, and information theory. A novel decision-support system is developed in order to guide intervention during cardiac arrest. The models are built upon knowledge extracted with signal-processing, non-linear dynamic, and machine-learning methods. The proposed ECG characterization, combined with information extracted from PetCO2 signals, shows viability for decision support in clinical settings. The approach, which focuses on the integration of multiple features through machine learning techniques, is well suited to the inclusion of multiple physiologic signals. Ventricular Fibrillation (VF) is a common presenting dysrhythmia in the setting of cardiac arrest, whose main treatment is defibrillation through direct-current countershock to achieve return of spontaneous circulation. However, defibrillation is often unsuccessful and may even lead to the transition of VF to more nefarious rhythms such as asystole or pulseless electrical activity. Multiple methods have been proposed for predicting defibrillation success based on examination of the VF waveform. To date, however, no analytical technique has been widely accepted. For a given desired sensitivity, the proposed model provides significantly higher accuracy and specificity than the state of the art. Notably, within the range of 80-90% sensitivity, the method provides about 40% higher specificity. This means that, when trained to the same level of sensitivity, the model will yield far fewer false positives (unnecessary shocks). Also introduced is a new model that predicts recurrence of arrest after a successful countershock is delivered; to date, no other work has sought to build such a model. The methods are validated by reporting multiple performance metrics calculated on (blind) test sets.
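As one concrete example of the kind of VF waveform feature such models combine, AMSA (amplitude spectrum area) is widely used in this literature. The sketch below is illustrative only and is not the thesis's feature set; the sampling rate and band limits are assumptions.

import numpy as np

def amsa(ecg, fs=250.0, lo=4.0, hi=48.0):
    """Amplitude spectrum area of a VF segment: spectral amplitude weighted
    by frequency, summed over a physiological band. Higher AMSA values have
    been associated with countershock success."""
    windowed = ecg * np.hanning(len(ecg))
    spec = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return float(np.sum(spec[band] * freqs[band]))

# Features such as AMSA can then feed any classifier, e.g.:
# from sklearn.ensemble import RandomForestClassifier
# clf = RandomForestClassifier().fit(feature_matrix, shock_outcomes)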
APA, Harvard, Vancouver, ISO, and other styles
35

Martens, Corentin. "Patient-Derived Tumour Growth Modelling from Multi-Parametric Analysis of Combined Dynamic PET/MR Data." Doctoral thesis, Universite Libre de Bruxelles, 2021. https://dipot.ulb.ac.be/dspace/bitstream/2013/320127/5/contratCM.pdf.

Full text
Abstract:
Gliomas are the most common primary brain tumours and are associated with poor prognosis. Among them, diffuse gliomas, which include their most aggressive form, glioblastoma (GBM), are known to be highly infiltrative. The diagnosis and follow-up of gliomas rely on positron emission tomography (PET) and magnetic resonance imaging (MRI). However, these imaging techniques do not currently allow the whole extent of such infiltrative tumours to be assessed, nor their preferred invasion patterns to be anticipated, leading to sub-optimal treatment planning. Mathematical tumour growth modelling has been proposed to address this problem. Reaction-diffusion tumour growth models, probably the most commonly used for modelling diffuse glioma growth, capture the proliferation and migration of glioma cells by means of a partial differential equation. Although the potential of such models has been shown in many works for patient follow-up and therapy planning, only a few limited clinical applications seem to have emerged from these works. This thesis aims to revisit reaction-diffusion tumour growth models using state-of-the-art medical imaging and data processing technologies, with the objective of integrating multi-parametric PET/MRI data to further personalise the model. Brain tissue segmentation on MR images is first addressed, with the aim of defining a patient-specific domain on which to solve the model. A previously proposed method to derive a tumour cell diffusion tensor from the water diffusion tensor assessed by diffusion-tensor imaging (DTI) is then implemented to guide the anisotropic migration of tumour cells along white matter tracts. The use of dynamic [S-methyl-11C]methionine ([11C]MET) PET is also investigated to derive patient-specific proliferation potential maps for the model. These investigations lead to the development of a microscopic compartmental model for amino acid PET tracer transport in gliomas. Based on the compartmental model results, a novel methodology is proposed to extract parametric maps from dynamic [11C]MET PET data using principal component analysis (PCA). The problem of estimating the initial conditions of the model from MR images is then addressed by means of a translational MRI/histology study in a case of non-operated GBM. Numerical solving strategies based on the widely used finite difference and finite element methods are finally implemented and compared. All these developments are embedded within a common framework allowing glioma growth to be studied in silico and providing a solid basis for further research in this field. However, the commonly accepted hypotheses relating the outlines of abnormalities visible on MRI to iso-contours of tumour cell density were invalidated by the translational study carried out, leaving open the questions of the initialisation and the validation of the model. Furthermore, the analysis of the temporal evolution of real multi-treated glioma patients demonstrates the limitations of the formulated model. These findings highlight current obstacles to the clinical application of reaction-diffusion tumour growth models and pave the way to further improvements.
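The reaction-diffusion model referred to here is typically the Fisher-KPP equation. Below is a minimal 1-D explicit finite-difference sketch with illustrative parameter values, not those of the thesis.

import numpy as np

def fisher_kpp_1d(u0, D=0.1, rho=0.02, dx=1.0, dt=0.5, steps=1000):
    """Explicit finite-difference solve of du/dt = D u_xx + rho u (1 - u),
    the scalar reaction-diffusion (Fisher-KPP) model of glioma growth.
    Stability requires dt <= dx^2 / (2 D)."""
    u = u0.astype(float).copy()
    lap = np.empty_like(u)
    for _ in range(steps):
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        lap[0] = 2.0 * (u[1] - u[0]) / dx ** 2        # no-flux boundaries
        lap[-1] = 2.0 * (u[-2] - u[-1]) / dx ** 2
        u += dt * (D * lap + rho * u * (1.0 - u))
    return u

u0 = np.zeros(200)
u0[95:105] = 0.8             # small initial tumour seed
profile = fisher_kpp_1d(u0)  # travelling front of speed ~ 2 sqrt(D rho)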
APA, Harvard, Vancouver, ISO, and other styles
36

Cherief-Abdellatif, Badr-Eddine. "Contributions to the theoretical study of variational inference and robustness." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG001.

Full text
Abstract:
This PhD thesis deals with variational inference and robustness in statistics and machine learning. More precisely, it focuses on the statistical properties of variational approximations and on the design of efficient algorithms for computing them in an online fashion, and it investigates Maximum Mean Discrepancy based estimators as learning rules that are robust to model misspecification. In recent years, variational inference has been extensively studied from the computational viewpoint, but until very recently the literature paid only little attention to the theoretical properties of variational approximations. In this thesis, we investigate the consistency of variational approximations in various statistical models and the conditions that ensure it. In particular, we tackle the special cases of mixture models and deep neural networks. We also provide theoretical justification for the ELBO maximization strategy, a model selection criterion that is widely used in the Variational Bayes community and is known to work well in practice. Moreover, Bayesian inference provides an attractive online-learning framework for analyzing sequential data, and offers generalization guarantees which hold even under model mismatch and in the presence of adversaries. Unfortunately, exact Bayesian inference is rarely feasible in practice and approximation methods are usually employed; but do such methods preserve the generalization properties of Bayesian inference? In this thesis, we show that this is indeed the case for some variational inference algorithms. We propose new online, tempered variational algorithms and derive their generalization bounds. Our theoretical result relies on the convexity of the variational objective, but we argue that the result should hold more generally and present empirical evidence in support of this. Our work presents theoretical justifications in favor of online algorithms that rely on approximate Bayesian methods. Another point addressed in this thesis is the design of a universal estimation procedure. This question is of major interest, in particular because it leads to robust estimators, a very active topic in statistics and machine learning. We tackle the problem of universal estimation using a minimum distance estimator based on the Maximum Mean Discrepancy. We show that the estimator is robust both to dependence and to the presence of outliers in the dataset. We also highlight connections with minimum distance estimators based on the L2-distance. Finally, we provide a theoretical study of the stochastic gradient descent algorithm used to compute the estimator, and we support our findings with numerical simulations. We also propose a Bayesian version of our estimator, which we study from both theoretical and computational points of view.
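As a toy illustration of minimum-MMD estimation and its robustness to outliers, the sketch below fits a Gaussian location parameter by grid search over a kernel MMD objective. It is a sketch only: the thesis studies a stochastic gradient descent procedure and its theory, not this grid search.

import numpy as np

def mmd2(x, y, bw=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between two
    1-D samples under a Gaussian kernel."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bw ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def fit_location(data, n_model=500, rng=np.random.default_rng(0)):
    """Minimum-MMD estimate of a Gaussian location parameter. Each data
    point's influence is bounded by the kernel, hence the robustness."""
    thetas = np.linspace(np.median(data) - 10, np.median(data) + 10, 201)
    losses = [mmd2(data, th + rng.standard_normal(n_model)) for th in thetas]
    return thetas[int(np.argmin(losses))]

rng = np.random.default_rng(1)
data = np.r_[rng.normal(2.0, 1.0, 200), np.full(20, 50.0)]  # gross outliers
print(fit_location(data))   # close to 2.0 despite the outliers at 50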
APA, Harvard, Vancouver, ISO, and other styles
37

Hall, Otto. "Inference of buffer queue times in data processing systems using Gaussian Processes : An introduction to latency prediction for dynamic software optimization in high-end trading systems." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214791.

Full text
Abstract:
This study investigates whether Gaussian Process Regression can be applied to evaluate buffer queue times in large-scale data processing systems. It additionally considers whether high-frequency data stream rates can be generalized into a small subset of the sample space. With the aim of providing a basis for dynamic software optimization, a promising foundation for continued research is introduced. The study is intended to contribute to Direct Market Access financial trading systems, which process immense amounts of market data daily. Due to certain limitations, we adopt a naïve approach and model latencies as a function of only data throughput in eight small historical intervals. The training and test sets are built from raw market data, and we resort to pruning operations to shrink the datasets by a factor of approximately 0.0005 in order to achieve computational feasibility. We further consider four different implementations of Gaussian Process Regression. The resulting algorithms perform well on pruned datasets, with an average R2 statistic of 0.8399 over six test sets of approximately the same size as the training set. Testing on non-pruned datasets indicates shortcomings of the generalization procedure, where input vectors corresponding to low-latency target values are associated with lower accuracy. We conclude that, depending on the application, these shortcomings may make the model intractable. For the purposes of this study, however, it is found that buffer queue times can indeed be modelled by regression algorithms. We discuss several methods for improvement, with regard to both pruning procedures and Gaussian Processes, and open up promising directions for continued research.
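For flavour, here is a hedged sketch of how such a regression might be set up with scikit-learn. The feature layout (eight throughput counts) follows the abstract, but the data, units, and kernel choices are hypothetical.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical layout: X holds eight historical throughput counts per
# observation, y the measured buffer queue time in microseconds.
rng = np.random.default_rng(0)
X = rng.random((400, 8)) * 1e4
y = 50.0 + 0.01 * X[:, -1] + rng.normal(0.0, 5.0, 400)

kernel = 1.0 * RBF(length_scale=np.ones(8)) + WhiteKernel(noise_level=25.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mean, std = gp.predict(X[:5], return_std=True)   # predictions with uncertainty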
APA, Harvard, Vancouver, ISO, and other styles
38

Zheng, Wenjing. "Apprentissage ciblé et Big Data : contribution à la réconciliation de l'estimation adaptative et de l’inférence statistique." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB044/document.

Full text
Abstract:
This dissertation focuses on developing robust semiparametric methods for complex parameters that emerge at the interface of causal inference and biostatistics, with applications to epidemiological and medical research in the era of Big Data. Specifically, we address two statistical challenges that arise in bridging the disconnect between data-adaptive estimation and statistical inference. The first challenge arises in maximizing the information learned from Randomized Control Trials (RCTs) through the use of adaptive trial designs. We present a framework to construct and analyze group-sequential, covariate-adjusted, response-adaptive (CARA) RCTs that admits the use of data-adaptive approaches both in constructing the randomization schemes and in estimating the conditional response model. This framework adds to the existing literature on CARA RCTs by allowing flexible options in both their design and analysis and by providing robust effect estimates even under model misspecification. The second challenge arises in obtaining a Central Limit Theorem when data-adaptive estimation is used to estimate the nuisance parameters. We consider as target parameter of interest the marginal risk difference of the outcome under a binary treatment, and propose a Cross-validated Targeted Minimum Loss Estimator (CV-TMLE), which augments the classical TMLE with a sample-splitting procedure. The proposed CV-TMLE inherits the double robustness and efficiency properties of the classical TMLE, and achieves asymptotic linearity under minimal conditions by avoiding the Donsker class condition.
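A close, simpler cousin of the proposed CV-TMLE is the cross-fitted augmented-IPW estimator of the marginal risk difference, sketched below. It illustrates the sample-splitting idea with data-adaptive nuisance estimators but is not the thesis's exact estimator; the use of gradient boosting is an assumption for the example.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

def cross_fitted_aipw(X, a, y, n_splits=5):
    """Cross-fitted augmented-IPW estimate of E[Y(1)] - E[Y(0)] for a
    binary treatment a and binary outcome y; all nuisance estimators are
    fit on folds held out from the evaluation points."""
    psi = np.zeros(len(y), dtype=float)
    for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(X):
        ps = GradientBoostingClassifier().fit(X[tr], a[tr])
        m1 = GradientBoostingClassifier().fit(X[tr][a[tr] == 1], y[tr][a[tr] == 1])
        m0 = GradientBoostingClassifier().fit(X[tr][a[tr] == 0], y[tr][a[tr] == 0])
        e = np.clip(ps.predict_proba(X[te])[:, 1], 0.01, 0.99)
        mu1 = m1.predict_proba(X[te])[:, 1]
        mu0 = m0.predict_proba(X[te])[:, 1]
        psi[te] = (mu1 - mu0
                   + a[te] * (y[te] - mu1) / e
                   - (1 - a[te]) * (y[te] - mu0) / (1 - e))
    return float(psi.mean())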
APA, Harvard, Vancouver, ISO, and other styles
39

Niaf, Émilie. "Aide au diagnostic du cancer de la prostate par IRM multi-paramétrique : une approche par classification supervisée." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10271/document.

Full text
Abstract:
Prostate cancer is one of the leading cause of death in France. Multi-parametric MRI is considered the most promising technique for cancer visualisation, opening the way to focal treatments as an alternative to prostatectomy. Nevertheless, its interpretation remains difficult and subject to inter- and intra-observer variability, which motivates the development of expert systems to assist radiologists in making their diagnosis. We propose an original computer-aided diagnosis system returning a malignancy score to any suspicious region outlined on MR images, which can be used as a second view by radiologists. The CAD performances are evaluated based on a clinical database of 30 patients, exhaustively and reliably annotated thanks to the histological ground truth obtained via prostatectomy. Finally, we demonstrate the influence of this system in clinical condition based on a ROC analysis involving 12 radiologists, and show a significant increase of diagnostic accuracy, rating confidence and a decrease in inter-expert variability. Building an anatomo-radiological correlation database is a complex and fastidious task, so that numerous studies base their evaluation analysis on the expertise of one experienced radiologist, which is thus doomed to contain uncertainties. We propose a new classification scheme, based on the support vector machine (SVM) algorithm, which is able to account for uncertain data during the learning step. The results obtained, both on toy examples and on our clinical database, demonstrate the potential of this new approach that can be extended to any machine learning problem relying on a probabilitic labelled dataset
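As a rough illustration of learning from uncertainly labelled samples, the sketch below down-weights low-confidence training points via scikit-learn's sample_weight argument. This is a common approximation, not the thesis's modified SVM objective; the data and confidence scores are synthetic assumptions.

```python
# Hypothetical sketch: approximating label-uncertainty-aware SVM training
# by weighting each sample with its class-membership confidence.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.r_[rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))]
y = np.r_[np.zeros(100), np.ones(100)]
# Confidence in [0.5, 1]: the annotator's belief that the label is correct.
confidence = rng.uniform(0.5, 1.0, size=200)

clf = SVC(kernel="rbf", C=1.0)
# Low-confidence samples contribute less to the margin constraints.
clf.fit(X, y, sample_weight=confidence)
print("training accuracy:", clf.score(X, y))
```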
APA, Harvard, Vancouver, ISO, and other styles
40

Dang, Hong-Phuong. "Approches bayésiennes non paramétriques et apprentissage de dictionnaire pour les problèmes inverses en traitement d'image." Thesis, Ecole centrale de Lille, 2016. http://www.theses.fr/2016ECLI0019/document.

Full text
Abstract:
Dictionary learning for sparse representation is a well-established approach to solving inverse problems. Optimization methods and parametric approaches have been particularly explored, but they meet some limitations, notably in the choice of parameters: in general the dictionary size must be fixed in advance, and knowledge of the noise level, and possibly of the sparsity level, may also be required. The methodological contributions of this thesis concern the joint learning of the dictionary and these parameters, with an emphasis on inverse problems in image processing. We propose and study the Indian Buffet Process for Dictionary Learning (IBP-DL) method, which takes a Bayesian nonparametric approach. A primer on Bayesian nonparametrics is first presented: the Dirichlet and Beta processes and their respective derivatives, the Chinese restaurant and Indian buffet processes, are described. The proposed dictionary learning model relies on an Indian buffet prior, which allows a dictionary of adaptive size to be learned. The Monte Carlo method proposed for inference is detailed. Noise and sparsity levels are also sampled, so that no parameter tuning is required in practice. Numerical experiments illustrate the performance of the approach on denoising, inpainting and compressed sensing problems, and the results are compared with the state of the art. Matlab and C source code is made available for the sake of reproducibility.
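The adaptive dictionary size comes from the Indian buffet process prior. A minimal sketch of drawing the binary atom-usage matrix from an IBP prior might look as follows; this shows the generative mechanism only, not the full IBP-DL inference, and alpha and the data sizes are illustrative assumptions.

```python
# Hypothetical sketch: drawing a binary feature-assignment matrix from an
# Indian buffet process prior, the mechanism that lets IBP-DL grow the
# number of dictionary atoms with the data.
import numpy as np

def sample_ibp(n_customers, alpha, rng):
    dishes = []      # per-dish counts of customers who have taken it
    rows = []
    for i in range(1, n_customers + 1):
        # Existing dish k is taken with probability m_k / i.
        row = [rng.random() < m / i for m in dishes]
        for k, taken in enumerate(row):
            if taken:
                dishes[k] += 1
        # Poisson(alpha / i) brand-new dishes for customer i.
        new = rng.poisson(alpha / i)
        row += [True] * new
        dishes += [1] * new
        rows.append(row)
    Z = np.zeros((n_customers, len(dishes)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(n_customers=10, alpha=2.0, rng=np.random.default_rng(0))
print(Z)   # rows: data items; columns: atoms in use (K adapts to the data)
```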
APA, Harvard, Vancouver, ISO, and other styles
41

Grapa, Anca-Ioana. "Caractérisation des réseaux de fibronectine représentés par des graphes de fibres à partir d'images de microscopie confocale 2D." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4031.

Full text
Abstract:
A major constituent of the extracellular matrix is a large protein called fibronectin (FN). Cellular FN is organized in fibrillar networks and can be assembled differently in the presence of two extra domains, EDA and EDB. Our objective was to develop quantitative numerical biomarkers that characterize the geometrical organization of the four FN variants (which differ by the inclusion/exclusion of EDA/EDB) from 2D confocal microscopy images, and to compare healthy and cancerous tissues. First, we showed through two classification pipelines, one based on curvelet features and one on a deep learning framework, that the FN variants can be distinguished with a performance similar to that of a human annotator. We then constructed a graph-based representation of the fibers, which were detected using Gabor filters. Graph-specific attributes were employed to classify the variants, proving that the graph representation embeds relevant information from the confocal images. Furthermore, we identified various techniques capable of differentiating the graphs, allowing us to compare the FN variants quantitatively and qualitatively. Performance analysis on toy graphs showed that methods based on graph matching and optimal transport can meaningfully compare graphs. Using the graph-matching framework, we proposed different methodologies for defining a prototype graph representative of a given FN class. Additionally, graph matching served as a tool to compute parameter deformation maps between healthy and cancerous tissues. These deformation maps were analyzed in a statistical framework to show whether or not the variation of a parameter can be explained by the variance within the same class.
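As a toy illustration of graph comparison by matching, the sketch below aligns the nodes of two graphs on simple structural attributes with the Hungarian algorithm and sums the matched costs. The attributes, padding scheme and example graphs are assumptions for illustration; the thesis's actual pipelines use richer fiber attributes and optimal-transport formulations.

```python
# Hypothetical sketch: comparing two fiber-like graphs by matching their
# nodes on simple structural attributes (degree, mean incident edge length).
import numpy as np
import networkx as nx
from scipy.optimize import linear_sum_assignment

def node_features(G):
    return np.array([[G.degree(v),
                      np.mean([G.edges[e].get("length", 1.0)
                               for e in G.edges(v)] or [0.0])]
                     for v in G.nodes])

def graph_distance(G1, G2):
    f1, f2 = node_features(G1), node_features(G2)
    # Pad with high-cost dummy rows/columns so the assignment is square.
    n = max(len(f1), len(f2))
    cost = np.full((n, n), f1.max() + f2.max())
    cost[:len(f1), :len(f2)] = np.linalg.norm(
        f1[:, None, :] - f2[None, :, :], axis=-1)
    r, c = linear_sum_assignment(cost)   # Hungarian matching
    return cost[r, c].sum()

G1, G2 = nx.grid_2d_graph(4, 4), nx.cycle_graph(16)
print("matching cost:", graph_distance(G1, G2))
```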
APA, Harvard, Vancouver, ISO, and other styles
42

Knefati, Muhammad Anas. "Estimation non-paramétrique du quantile conditionnel et apprentissage semi-paramétrique : applications en assurance et actuariat." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2280/document.

Full text
Abstract:
The thesis consists of two parts, one devoted to the estimation of conditional quantiles and the other to supervised learning. The conditional quantile estimation part is organized into three chapters. Chapter 1 introduces local linear regression and presents the methods most commonly used in the literature to estimate the smoothing parameter. Chapter 2 reviews existing nonparametric methods for estimating the conditional quantile and compares them through numerical experiments on simulated and real data. Chapter 3 is devoted to a new conditional quantile estimator that we propose, based on a kernel that is asymmetric in x; under certain assumptions, this estimator proves more efficient than the usual estimators. The supervised learning part also comprises three chapters. Chapter 4 is an introduction to statistical learning and the basic notions used in this part. Chapter 5 reviews conventional methods of supervised classification. Chapter 6 proposes a method for transferring a semiparametric learning model; the performance of this method is demonstrated by numerical experiments on morphometric and credit-scoring data.
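A minimal sketch of the kind of kernel-based conditional quantile estimator surveyed in the first part: a weighted-quantile (inverted weighted CDF) estimate with a symmetric Gaussian kernel, whereas the proposed estimator replaces this with a kernel asymmetric in x. The bandwidth, data and evaluation point are illustrative assumptions.

```python
# Hypothetical sketch: a kernel-weighted estimator of the conditional
# quantile q_tau(Y | X = x0), via the inverted weighted empirical CDF.
import numpy as np

def cond_quantile(x0, X, Y, tau=0.5, h=0.3):
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
    order = np.argsort(Y)
    cum = np.cumsum(w[order]) / w.sum()      # weighted CDF of Y near x0
    return Y[order][np.searchsorted(cum, tau)]

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 500)
Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.3, 500)
for tau in (0.1, 0.5, 0.9):
    print(tau, cond_quantile(0.25, X, Y, tau=tau))
```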
APA, Harvard, Vancouver, ISO, and other styles
43

Hanna, Hilding. "Experiences of learning, development and preparedness for clinical practice among undergraduate paramedicine students, graduate/intern paramedics and their preceptors: a qualitative systematic review." Thesis, 2021. http://hdl.handle.net/2440/130768.

Full text
Abstract:
Objective: This systematic review aims to identify and explore the barriers to and facilitators of learning and preparedness for clinical practice among undergraduate paramedicine students, graduate/intern paramedics and their preceptors.
Introduction: The educational landscape for paramedicine has evolved considerably since the introduction of the first paramedicine Bachelor's degree. The contemporary barriers to and facilitators of learning within the context of early career training in paramedicine education need to be identified.
Inclusion criteria: Participants were undergraduate paramedicine students, graduate/intern paramedics, newly qualified UK paramedics and their preceptors, within Australia, the UK and New Zealand. Published and unpublished English-language studies utilizing qualitative research designs were considered.
Methods: Five bibliographic databases (PubMed, CINAHL, ERIC, Embase and ProQuest Dissertations and Theses) were searched in 2018. Websites relevant to paramedic learning and a hand search of paramedicine journals (2019) were also undertaken. All studies identified from the search were examined against the inclusion criteria. Papers selected for inclusion were assessed by two independent reviewers for methodological quality prior to inclusion in the review. Qualitative research findings were extracted and pooled, then assembled and categorized based on similarity in meaning. These categories were subjected to a meta-synthesis in order to produce a single, comprehensive set of synthesized findings.
Results: Twenty-six studies were included in the review: eleven used semi-structured interviews, five used open-ended interviews and ten used focus groups, with a total sample size of 564 participants. Sixteen studies focussed on undergraduate paramedicine students, four involved paramedic preceptors, two focused on paramedic educators at paramedicine universities, and four included undergraduate paramedicine students and their preceptors. A total of 295 findings were extracted and grouped into twenty-eight categories. The categories were grouped into five synthesized findings:
• The role of mentoring/preceptorship
• Opportunities to develop emotional intelligence and communication skills
• The role of non-traditional placements/experiences
• The role of non-traditional classroom teaching methods
• Preparedness for practice
Conclusions: A variety of learning models exist, with barriers and facilitators that impact paramedicine students, graduate paramedics and preceptors. The findings emphasize the importance of a preceptor to student learning, and the need to develop paramedicine students' skills and capacity to deal with the emotional side of paramedic practice. Paramedicine students and paramedic graduates were found to be underprepared to communicate effectively with patients, families and other professionals. Most of these barriers could be mitigated by the use of non-traditional placements/experiences and non-traditional teaching methods. The introduction of a paramedic facilitator model was shown to have considerable benefits, suggesting that a national model, similar to those of other allied health professions, may be beneficial.
The findings indicate a need for more effective communication between the education sector and industry in relation to the challenges that currently exist in paramedicine education and what models appear to facilitate learning, development and preparedness for clinical practice.
Thesis (MPhil) -- University of Adelaide, Adelaide Medical School, 2020
APA, Harvard, Vancouver, ISO, and other styles
44

Essington, Timothy Don. "Learning in simulation: theorizing Ricoeur in a study involving paramedics, pilots, and others." Phd thesis, 2010. http://hdl.handle.net/10048/1302.

Full text
Abstract:
The use of simulation is becoming increasingly important in the education of practitioners whose field of work has a low tolerance for error. In aerospace, aviation, medicine, paramedicine, and the military, simulations are expected to provide working practitioners with on-demand experience. However, the ways in which learning emerges out of simulation have been poorly understood. This research provides insight into the processes of learning that are generated and the forms of knowledge that arise out of learning endeavors based upon the use of simulation. The study employed a form of naturalistic inquiry. Eight individuals from seven domains of work were extensively interviewed regarding their simulation experience. Conceptually, the methods are premised upon Patton's (2002) understanding of qualitative inquiry, van Manen's (1997) phenomenological approach to lived experience, and Ricoeur's hermeneutical approach to the interpretation of the text. Ricoeur's (1986) conceptualization of ideology and utopia as a dialectic comprising the social imaginary, and Kearney's (2003) analysis of the Other, inform the analysis. The central finding of this study is that experience in simulation is consistently interpreted to be both real and an imagination of the real. Experiential learning has at least five dimensions: purpose, interpretation, engagement, self, and context (Fenwick, 2003), all of which are affected in the pedagogical activity of simulation. The learning that emerges out of simulation always involves the social imaginary. Simulation forces an engagement with the symbolic nature of the social imaginary, and it is because a specific aspect of the social imaginary is reproduced in simulation that a need for interpretation is provoked and learning occurs. This study is theoretically significant because it adds to the academic literature through an improved understanding of simulation as a complex entanglement of the real and the imaginary. Its practical significance lies in understanding the effective use of simulation as a pedagogical tool which can inform or reify the existing dimensions of experiential learning. Overall, the study contributes to our knowledge about how learning emerges out of simulation and how simulation fosters such an emergence.
Adult Education
APA, Harvard, Vancouver, ISO, and other styles
45

Van, Nugteren Benjamin Simon. "Out-of-hospital critical case time intervals occuring in the Greater Johannesburg Metropolitan area, Gauteng, as recorded in a paramedic clinical learning database." Thesis, 2015. http://hdl.handle.net/10539/18508.

Full text
Abstract:
Background. Out-of-hospital time intervals are often used to assess Emergency Medical Service (EMS) system performance. In addition, these time intervals are linked to patient outcome in certain time-dependent pathologies such as stroke, out-of-hospital cardiac arrest (OHCA) and myocardial infarction. A number of variables are thought to influence these time intervals, such as the number of interventions performed and the transport distance to hospital. Objective. This Johannesburg-based study assessed out-of-hospital critical case time intervals as recorded in a paramedic student clinical learning database. Methods. This retrospective study analysed 19,742 cases that were attended to by paramedic students and their clinical supervisors. Of these, 1,360 critical cases in the Greater Johannesburg Metropolitan (GJM) area over the eight-year period under review met the inclusion criteria. Results. Eight hundred and fifty-six "trauma" cases and 504 "medical" cases were analysed. The mean response time interval was 10.67 minutes (95% CI: 10.48–10.86). Among the critical cases assessed, the mean on-scene time interval was 26.69 minutes (95% CI: 26.23–27.15). Generally, critical cases in Johannesburg had longer total incident time intervals (53.53 minutes, 95% CI: 52.90–54.15) than reported in international data. Conclusions. This study found that, compared with international trends, critically ill patients locally experience similar response time intervals, while on-scene time intervals are comparatively extended. In addition, an increase in the number of on-scene interventions led to an increase in on-scene time intervals.
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Chunping. "Non-parametric Bayesian Learning with Incomplete Data." Diss., 2010. http://hdl.handle.net/10161/3075.

Full text
Abstract:

In most machine learning approaches, it is usually assumed that data are complete. When data are partially missing due to various reasons, for example, the failure of a subset of sensors, image corruption or inadequate medical measurements, many learning methods designed for complete data cannot be directly applied. In this dissertation we treat two kinds of problems with incomplete data using non-parametric Bayesian approaches: classification with incomplete features and analysis of low-rank matrices with missing entries.

Incomplete data in classification problems are handled by assuming input features to be generated from a mixture-of-experts model, with each individual expert (classifier) defined by a local Gaussian in feature space. With a linear classifier associated with each Gaussian component, nonlinear classification boundaries are achievable without the introduction of kernels. Within the proposed model, the number of components is theoretically "infinite", as defined by a Dirichlet process construction, with the actual number of mixture components (experts) needed inferred from the data under test. With a higher-level DP we further extend the classifier to the analysis of multiple related tasks (multi-task learning), where model components may be shared across tasks. Available data can thereby be augmented through information transfer even when tasks are similar only in some local regions of feature space, which is particularly critical for cases with scarce incomplete training samples from each task. The proposed algorithms are implemented using efficient variational Bayesian inference, and robust performance is demonstrated on synthetic data, benchmark data sets, and real data with natural missing values.
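The "theoretically infinite" number of experts can be pictured with a truncated stick-breaking draw from a Dirichlet process prior: only a handful of components receive non-negligible weight, which is how the effective number of experts is inferred from data. This sketch shows the prior draw only, with an illustrative concentration parameter, not the variational inference used in the dissertation.

```python
# Hypothetical sketch: truncated stick-breaking draw from a Dirichlet
# process, illustrating how a DP mixture keeps the component count
# conceptually infinite while few components carry appreciable weight.
import numpy as np

def stick_breaking(alpha, truncation, rng):
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1 - betas)[:-1]])
    return betas * remaining          # mixture weights

rng = np.random.default_rng(0)
w = stick_breaking(alpha=2.0, truncation=50, rng=rng)
print("weights > 1%:", (w > 0.01).sum())   # effective number of experts
```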

Another scenario of interest is completing a data matrix with missing entries. The recovery of missing matrix entries is not possible without additional assumptions on the matrix under test, and here we employ the common assumption that the matrix is low-rank. Unlike methods with a preset fixed rank, we propose a non-parametric Bayesian alternative based on the singular value decomposition (SVD), in which missing entries are handled naturally and the number of underlying factors is imposed to be small and inferred in the light of the observed entries. Although we assume entries are missing at random, the proposed model is generalized to incorporate auxiliary information, including missingness features. We also make a first attempt in the matrix-completion community to acquire new entries actively. By introducing a probit link function, we are able to handle counting matrices, with the decomposed low-rank matrices latent. The basic model and its extensions are validated on synthetic data, a movie-rating benchmark and a new data set presented for the first time.
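For the low-rank completion setting, a minimal non-Bayesian stand-in is iterative hard-thresholded SVD imputation with a preset rank; the dissertation instead infers the number of factors and handles missingness within the Bayesian SVD model. The rank, mask rate and data below are illustrative assumptions.

```python
# Hypothetical sketch: filling missing entries with fixed-rank iterative
# SVD imputation (a simple stand-in for nonparametric Bayesian completion).
import numpy as np

rng = np.random.default_rng(0)
true = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 20))   # rank-3 matrix
mask = rng.random(true.shape) < 0.6                          # observed entries
X = np.where(mask, true, 0.0)                                # zero-fill start

for _ in range(100):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[3:] = 0.0                                  # keep the top-3 singular values
    X = U @ np.diag(s) @ Vt                      # low-rank projection
    X[mask] = true[mask]                         # re-impose observed entries

print("RMSE on missing entries:",
      np.sqrt(np.mean((X[~mask] - true[~mask]) ** 2)))
```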


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
47

"Statistical Parametric Speech Synthesis using Deep Learning Architectures." 2016. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1292251.

Full text
Abstract:
This thesis presents a statistical parametric speech synthesis framework using deep learning techniques and models. Existing speech synthesis systems face two main challenges: the complexity of expressing speech prosody through its acoustic realizations, and the sparsity of training data. Both limit the naturalness of synthesized speech. This thesis attempts to improve synthesis performance in terms of speech naturalness by leveraging the modeling power of deep learning architectures.
To precisely represent the linguistic contexts, we define a hierarchical prosodic structure to organize both segmental and suprasegmental features, and propose a syllable-level representation of this hierarchical structure for speech synthesis using deep learning architectures.
Inspired by the Deep Belief Network's (DBN's) success in handwritten digit image recognition and generation, we propose to model speech spectrograms, in addition to F0 contours, as 2-D images in the DBN framework. In order to fit the speech prosodic and acoustic parameters, which consist of data with various distributions, we adapt the original model into a Weighted Multi-Distribution DBN (wMD-DBN). Compared with the predominant HMM-based approach, objective evaluation shows that the spectrum generated by wMD-DBN has less distortion. Subjective tests also confirm the advantage of the wMD-DBN spectrum, and the wMD-DBN system gives a similar overall quality to the HMM baseline.
Previous work on DNNs in the speech community mainly focused on using them as classifiers for better acoustic modeling in speech recognition tasks. Here we treat the DNN as a generative model and use it for linguistic-to-acoustic feature mapping in speech synthesis. Compared to the DBN model, the DNN requires only a single computing pass for feature prediction, making it more suitable for real-time synthesis. On the other hand, the DNN models the conditional probability rather than the joint probability modeled by the DBN, which is more intuitive for the feature mapping task. As with the wMD-DBN, we adapt the output layer of a plain DNN into a multi-distribution (MD) output layer, and design specialized loss functions for acoustic features with uncommon distributions. To achieve good performance with a deep model structure, we use a generatively pre-trained DBN as the initialization to build the MD-DNN architecture. Both objective and subjective evaluations show that the MD-DNN model outperforms the wMD-DBN and HMM in terms of the naturalness of synthesized speech.
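A minimal sketch of the DNN-as-generative-mapping idea: a small feed-forward network from linguistic context features to acoustic targets, with a two-part output (continuous spectral regression plus a voiced/unvoiced probability) standing in for the thesis's multi-distribution output layer. Layer sizes, feature dimensions and losses are illustrative assumptions, written here in PyTorch.

```python
# Hypothetical sketch: linguistic-to-acoustic feature mapping with a
# simplified two-part (multi-distribution-style) output layer.
import torch
import torch.nn as nn

class LinguisticToAcoustic(nn.Module):
    def __init__(self, n_ling=100, n_spec=40):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_ling, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU())
        self.spec = nn.Linear(256, n_spec)   # spectral features (MSE loss)
        self.vuv = nn.Linear(256, 1)         # voicing flag (BCE loss)

    def forward(self, x):
        h = self.trunk(x)
        return self.spec(h), torch.sigmoid(self.vuv(h))

net = LinguisticToAcoustic()
x = torch.randn(8, 100)                      # a batch of linguistic contexts
spec_t = torch.randn(8, 40)                  # dummy spectral targets
vuv_t = torch.randint(0, 2, (8, 1)).float()  # dummy voicing targets
spec, vuv = net(x)
loss = (nn.functional.mse_loss(spec, spec_t)
        + nn.functional.binary_cross_entropy(vuv, vuv_t))
loss.backward()                              # one combined training step
print("combined loss:", float(loss))
```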
Kang, Shiyin.
Thesis (Ph.D.), Chinese University of Hong Kong, 2016.
Includes bibliographical references (leaves ).
Abstracts also in Chinese.
Title from PDF title page (viewed on …).
Detailed summary in vernacular field only.
APA, Harvard, Vancouver, ISO, and other styles
48

Castro, Rui M. "Active learning and adaptive sampling for non-parametric inference." Thesis, 2008. http://hdl.handle.net/1911/22265.

Full text
Abstract:
This thesis presents a general discussion of active learning and adaptive sampling. In many practical scenarios it is possible to use information gleaned from previous observations to focus the sampling process, in the spirit of the "twenty questions" game: as more samples are collected, one can learn how to improve the sampling process by deciding where to sample next. These sampling feedback techniques are generically known as active learning or adaptive sampling. Although appealing, the analysis of such methodologies is difficult, since there are strong dependencies between the observed data; this is especially important in the presence of measurement uncertainty or noise. The main thrust of this thesis is to characterize the potential and the fundamental limitations of active learning, particularly in non-parametric settings. First, we consider the probabilistic classification setting. Using minimax analysis techniques, we investigate the achievable rates of convergence of the classification error for broad classes of distributions characterized by decision boundary regularity and noise conditions (which describe the observation noise near the decision boundary). The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the learning rates derived are tight for "boundary fragment" classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below. Second, we study the problem of estimating an unknown function from noisy point-wise samples, where the sample locations are adaptively chosen based on previous samples and observations, as described above. We present results characterizing the potential and fundamental limits of active learning for certain classes of nonparametric regression problems, and also present practical algorithms capable of exploiting the sampling adaptivity and provably improving upon non-adaptive techniques. Our active sampling procedure is based on a novel coarse-to-fine strategy, motivated by the success of spatially adaptive methods such as wavelet analysis in nonparametric function estimation. Using the ideas developed in solving the function regression problem, we present a greedy algorithm for estimating piecewise constant functions with smooth boundaries that is near minimax optimal but computationally much more efficient than the best dictionary-based method (in this case, wedgelet approximations). Finally, we compare adaptive sampling (where feedback guiding the sampling process is present) with non-adaptive compressive sampling (where non-traditional projection samples are used). It is shown that, under mild noise, compressive sampling can be competitive with adaptive sampling, but adaptive sampling significantly outperforms compressive sampling in lower signal-to-noise conditions. This work also helps in understanding the different behavior of compressive sampling in noisy and noiseless settings.
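A pool-based uncertainty-sampling loop gives a concrete picture of the sampling feedback the thesis analyzes; note that the thesis derives minimax rates for active procedures in general rather than prescribing this particular heuristic. The data, model and query budget below are illustrative assumptions.

```python
# Hypothetical sketch: pool-based active learning with uncertainty
# sampling, querying the point nearest the current decision boundary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

labeled = list(rng.choice(1000, size=10, replace=False))
for _ in range(40):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    p = clf.predict_proba(X)[:, 1]
    # Query the unlabeled pool point closest to the decision boundary.
    candidates = np.setdiff1d(np.arange(1000), labeled)
    labeled.append(candidates[np.argmin(np.abs(p[candidates] - 0.5))])

clf = LogisticRegression().fit(X[labeled], y[labeled])
print("accuracy with", len(labeled), "labels:", clf.score(X, y))
```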
APA, Harvard, Vancouver, ISO, and other styles
49

De, La Garza John A. "A Paramedic's Story: An Autoethnography of Chaos and Quest." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9802.

Full text
Abstract:
This research study represents a personalized account of my experiences as a San Antonio Fire Department (SAFD) paramedic. In this study I bring the reader closer to the subculture of the Emergency Medical Services (EMS) through the research methodology of autoethnography. This qualitative method allows me to be researcher, subject, and narrator of the study. Autoethnography requires considerable attention to reflection, introspection, and self-analysis through the use of the narrative. Written in first person voice, I am positioned in the narrative in a manner that allows me to communicate directly with the audience. Through an insider’s perspective, I have traced the time I spent in EMS by reflecting, interpreting, and analyzing a collection of epochal events that significantly impacted my life both personally and professionally. There are five themes that I have identified as salient to the meaning-making process of the study: (a) death and dying, (b) faith and spirituality, (c) job burnout, (d) dealing and coping with job-related stress, and (e) alcohol abuse. The events that I have selected for this study may be read and interpreted as a prelude to what is a much broader narrative of my tenure in EMS and of other emergency responders’ experiences as well. The study explores how my life was impacted beyond the immediate experience and how the story continues to evolve to the present day. The study establishes a foundation for designing training programs to be used by public safety educators. Three theoretical elements of adult learning that help inform professional education strategies for emergency responders have been identified: (a) experiential, (b) narrative, and (c) transformative learning. The study also sensitizes the general public to the physical, social, and psychological demands that are placed on paramedics. It is important for the reader to know that these public servants are ordinary human beings doing extraordinary work in one of the most stressful and hazardous professions in the world.
APA, Harvard, Vancouver, ISO, and other styles
50

Shin, Young-in. "Parametric kernels for structured data analysis." Thesis, 2008. http://hdl.handle.net/2152/29669.

Full text
Abstract:
Structured representation of input physical patterns as a set of local features has been useful for a variety of robotics and human-computer interaction (HCI) applications, as it enables a stable understanding of variable inputs. However, this representation does not fit conventional machine learning algorithms and distance metrics, because they assume vector inputs; learning from input patterns with variable structure is thus challenging. To address this problem, I propose a general and systematic method to design distance metrics between structured inputs that can be used in conventional learning algorithms. Based on the observation that the geometric distributions of local features over physical patterns are stable across similar inputs, this is done by combining local similarities with the conformity of the geometric relationships between local features. The resulting distance metrics, called "parametric kernels", are positive semi-definite and require almost linear time to compute. To demonstrate the general applicability and efficacy of this approach, I designed and applied parametric kernels to handwritten character recognition, on-line face recognition, and object detection from laser range finder sensor data. Parametric kernels achieve recognition rates competitive with state-of-the-art approaches in these tasks.
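A minimal sketch of a set-level kernel in this spirit: pairwise products of a descriptor similarity and a positional-conformity term, averaged over all feature pairs, which keeps the kernel positive semi-definite. The exact formulation, feature layout and bandwidths are illustrative assumptions, not the dissertation's definition.

```python
# Hypothetical sketch: a kernel between two sets of (position, descriptor)
# local features combining appearance similarity and geometric agreement.
import numpy as np

def set_kernel(A, B, sigma_f=1.0, sigma_p=1.0):
    """A, B: arrays of shape (n, 2 + d); the first 2 columns are positions."""
    pa, fa = A[:, :2], A[:, 2:]
    pb, fb = B[:, :2], B[:, 2:]
    # Pairwise Gaussian similarities for positions and for descriptors.
    kp = np.exp(-np.sum((pa[:, None] - pb[None]) ** 2, -1) / (2 * sigma_p**2))
    kf = np.exp(-np.sum((fa[:, None] - fb[None]) ** 2, -1) / (2 * sigma_f**2))
    # Mean over all pairs of the product kernel: positive semi-definite.
    return float((kp * kf).mean())

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 6))   # 5 local features: 2-D position + 4-D descriptor
B = rng.normal(size=(7, 6))
print(set_kernel(A, B), set_kernel(A, A))
```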
APA, Harvard, Vancouver, ISO, and other styles
