Theses on the topic "Progressive data analysis"

Consult the 25 best theses for your research on the topic "Progressive data analysis".

1

Morone, Daniel Justin Reese. « Progressive Collapse : Simplified Analysis Using Experimental Data ». The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354602937.

Full text
2

Larson, Michael Andrew. « A Progressive Refinement of Postural Human Balance Models Based on Experimental Data Using Topological Data Analysis ». Miami University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=miami159620428141697.

Full text
3

Amrani, Naoufal, Joan Serra-Sagrista, Miguel Hernandez-Cabronero et Michael Marcellin. « Regression Wavelet Analysis for Progressive-Lossy-to-Lossless Coding of Remote-Sensing Data ». IEEE, 2016. http://hdl.handle.net/10150/623190.

Full text
Abstract:
Regression Wavelet Analysis (RWA) is a novel wavelet-based scheme for coding hyperspectral images that employs multiple regression analysis to exploit the relationships among spectral wavelet-transformed components. The scheme is based on a pyramidal prediction, using different regression models, to increase the statistical independence in the wavelet domain. For lossless coding, RWA has proven to be superior to other spectral transforms such as PCA and to the best and most recent coding standard in remote sensing, CCSDS-123.0. In this paper we show that RWA also allows progressive lossy-to-lossless (PLL) coding and that it attains a rate-distortion performance superior to those obtained with state-of-the-art schemes. To take into account the predictive significance of the spectral components, we propose a Prediction Weighting scheme for JPEG2000 that captures the contribution of each transformed component to the prediction process.
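The core idea of regression-based decorrelation can be sketched minimally in Python. This is an illustrative single-predictor version, not the paper's full pyramidal multi-regression scheme: each transformed component is predicted from the previous one by least squares, and only the residuals are kept for coding.

```python
def regression_residuals(components):
    """Predict each component from the previous one by least-squares
    regression and keep only the residuals, which have lower energy
    (and are cheaper to code) when consecutive components are correlated."""
    out = [list(components[0])]  # first component is coded as-is
    for prev, cur in zip(components, components[1:]):
        n = len(prev)
        mp, mc = sum(prev) / n, sum(cur) / n
        var = sum((p - mp) ** 2 for p in prev)
        cov = sum((p - mp) * (c - mc) for p, c in zip(prev, cur))
        slope = cov / var if var else 0.0
        # residual = actual value minus the least-squares prediction
        out.append([c - (mc + slope * (p - mp)) for p, c in zip(prev, cur)])
    return out
```

When one component is an exact linear function of the previous one, the residuals vanish entirely, which is the limiting case the scheme exploits.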
4

Silvaroli, Antonio. « Design and Analysis of Erasure Correcting Codes in Blockchain Data Availability Problems ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
This work addresses the concepts of Blockchain and Bitcoin, with emphasis on data-availability attacks against transactions when the network includes so-called "light nodes", which improve the scalability of the system. It analyzes how the Bitcoin blockchain behaves when the "Merkle Tree" data structure is coded, so as to increase the probability that light nodes detect the erasure of transactions carried out by attacking nodes. Coding with erasure codes, in particular low-density parity-check (LDPC) codes, increases the probability of detecting an erasure, and iterative decoding makes it possible to recover the erased data. The work then addresses the problem of stopping sets, i.e. the structures that prevent data recovery through iterative decoding, and designs an algorithm for enumerating such structures. Some theoretical solutions from the literature are then tested empirically. Subsequently, new codes are designed, following a design method different from the one found in the literature. These codes improve performance, since their minimum stopping set is larger than that of previously analyzed codes, making availability attacks probabilistically harder. As a consequence, the network throughput becomes more stable since, with fewer successful attacks, new codes for re-encoding transactions need to be generated less frequently. Finally, possible improvements are proposed.
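The stopping-set notion above can be made concrete with a small brute-force enumerator, a sketch usable only for tiny codes (the thesis designs a dedicated enumeration algorithm). A stopping set is a non-empty set S of variable nodes such that no check node is connected to S exactly once.

```python
from itertools import combinations

def is_stopping_set(H, S):
    # S is a stopping set iff no row of the parity-check matrix H
    # has exactly one of its ones inside S (a lone connection would
    # let iterative decoding resolve that erasure)
    return all(sum(row[c] for c in S) != 1 for row in H)

def minimum_stopping_set(H):
    """Smallest non-empty stopping set, found by exhaustive search
    over all subsets of variable-node indices, smallest first."""
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                return set(S)
    return None
```

A larger minimum stopping set means an attacker must erase more transactions before iterative decoding can be defeated, which is why the thesis optimizes for it.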
5

Eller, Michael R. « Utilizing Economic and Environmental Data from the Desalination Industry as a Progressive Approach to Ocean Thermal Energy Conversion (OTEC) Commercialization ». ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1733.

Full text
Abstract:
Ocean Thermal Energy Conversion (OTEC) is a renewable energy technology that has to overcome several key challenges before achieving its ultimate goal of producing baseload power on a commercial scale. The economic challenge of deploying an OTEC plant remains the biggest barrier to implementation. Although small OTEC demonstration plants and recent advances in subsystem technologies have proven OTEC's technical merits, the process still lacks the crucial operational data required to justify investments in large commercial OTEC plants on the order of 50-100 megawatts of net electrical power (MWe-net). A pre-commercial pilot plant on the order of 5-10 MWe-net is required for an OTEC market to evolve. In addition to the economic challenge, OTEC plants have potential for adverse environmental impacts from redistribution of nutrients and residual chemicals in the discharge plume. Although long-term operational records are not available for commercial-size OTEC plants, synergistic operational data can be leveraged from the desalination industry to improve the potential for OTEC commercialization. Large-capacity desalination plants primarily use membranes or thermal evaporator tubes to transform enormous amounts of seawater into freshwater. Thermal desalination plants in particular possess many of the same technical, economic, and environmental traits as a commercial-scale OTEC plant. Substantial long-term economic data and environmental impact results are now widely available, since commercial desalination began in the 1950s. Analysis of this data indicates that the evolution of the desalination industry could be akin to the potential future advancement of OTEC. Furthermore, certain scenarios exist where a combined OTEC-desalination plant provides a new opportunity for commercial plants. This paper seeks to utilize operational data from the desalination industry as a progressive approach towards OTEC commercialization.
6

Vidal, Jules. « Progressivité en analyse topologique de données ». Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS398.

Full text
Abstract:
Topological Data Analysis (TDA) forms a collection of tools that enable the generic and efficient extraction of features in data. However, although most TDA algorithms have practicable asymptotic complexities, these methods are rarely interactive on real-life datasets, which limits their usability for interactive data analysis and visualization. In this thesis, we aimed to develop progressive methods for the TDA of scientific scalar data, which can be interrupted to swiftly provide a meaningful approximate output and are able to refine it otherwise. First, we introduce two progressive algorithms for the computation of the critical points and the extremum-saddle persistence diagram of a scalar field. Next, we revisit this progressive framework to introduce an approximation algorithm for the persistence diagram of a scalar field, with strong guarantees on the related approximation error. Finally, in an effort to perform visual analysis of ensemble data, we present a novel progressive algorithm for the computation of the discrete Wasserstein barycenter of a set of persistence diagrams, a notoriously computationally intensive task. Our progressive approach enables the approximation of the barycenter within interactive times. We extend this method to a progressive, time-constrained topological ensemble clustering algorithm.
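The progressive (anytime) pattern described here can be illustrated with a toy case, not the thesis's algorithm: for equal-size 1D point sets, the optimal Wasserstein-2 matching reduces to sorting, so the exact barycenter is the pointwise mean of the sorted sets. Incorporating one set at a time yields an estimate that can be interrupted at any point, keeping the latest refinement.

```python
def progressive_barycenter_1d(point_sets):
    """Anytime Wasserstein-2 barycenter of equal-size 1D point sets.

    In 1D the optimal matching is obtained by sorting, so the exact
    barycenter is the pointwise mean of the sorted sets.  Yielding a
    running mean after each set is incorporated makes the computation
    interruptible: stop whenever the time budget runs out."""
    running = None
    for k, pts in enumerate(point_sets, start=1):
        s = sorted(pts)
        if running is None:
            running = list(map(float, s))
        else:
            # incremental mean update, position by position
            running = [r + (x - r) / k for r, x in zip(running, s)]
        yield list(running)
```

Each yielded list is a valid approximate output; the final one is the exact barycenter, which is the defining property of a progressive algorithm.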
7

Kmetzsch, Virgilio. « Multimodal analysis of neuroimaging and transcriptomic data in genetic frontotemporal dementia ». Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS279.pdf.

Full text
Abstract:
Frontotemporal dementia (FTD) represents the second most common type of dementia in adults under the age of 65. Currently, there are no treatments that can cure this condition. In this context, it is essential that biomarkers capable of assessing disease progression are identified. This thesis has two objectives. First, to analyze the expression patterns of microRNAs taken from blood samples of patients, asymptomatic carriers of genetic mutations causing FTD, and controls, to identify whether the expression of certain microRNAs correlates with mutation status and disease progression. Second, to propose methods for integrating cross-sectional microRNA and neuroimaging data to estimate disease progression. We conducted three studies. Initially, we focused on plasma samples from C9orf72 expansion carriers. We identified four microRNAs whose expression correlated with the clinical status of the participants. Next, we tested all microRNA signatures identified in the literature as potential biomarkers of FTD or amyotrophic lateral sclerosis (ALS) in two independent cohorts. Finally, in our third study, we proposed a new approach, using a supervised multimodal variational autoencoder, that estimates a disease progression score from cross-sectional microRNA expression and neuroimaging datasets with small sample sizes. The work conducted in this interdisciplinary thesis showed that it is possible to use non-invasive biomarkers, such as circulating microRNAs and magnetic resonance imaging, to assess the progression of rare neurodegenerative diseases such as FTD and ALS.
8

Conway, Devon S. « Long-Term Benefits of Early Treatment in Multiple Sclerosis : An Investigation Utilizing a Novel Data Collection Technique ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1307635721.

Full text
9

Baneshi, Mohammad Reza. « Statistical models in prognostic modelling with many skewed variables and missing data : a case study in breast cancer ». Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4191.

Full text
Abstract:
Prognostic models have clinical appeal to aid therapeutic decision making. In the UK, the Nottingham Prognostic Index (NPI) has been used, for over two decades, to inform patient management. However, it has been commented that the NPI is not capable of identifying a subgroup of patients with a prognosis so good that adjuvant therapy, with its potentially harmful side effects, can be withheld safely. Tissue Microarray Analysis (TMA) now makes possible the measurement of biological tissue microarray features of frozen biopsies from breast cancer tumours. These give an insight into the biology of the tumour and hence could have the potential to enhance prognostic modelling. I therefore wished to investigate whether biomarkers can add value to clinical predictors to provide improved prognostic stratification in terms of Recurrence Free Survival (RFS). However, there are very many biomarkers that could be measured, they usually exhibit skewed distributions, and missing values are common. The statistical issues raised are thus the number of variables being tested, the form of the association, imputation of missing data, and assessment of the stability and internal validity of the model. Therefore the specific aim of this study was to develop and demonstrate the performance of statistical modelling techniques that will be useful in circumstances where there is a surfeit of explanatory variables and missing data; in particular to achieve useful and parsimonious models while guarding against instability and overfitting. I also sought to identify a subgroup of patients with a prognosis so good that a decision can be made to avoid adjuvant therapy. I aimed to provide statistically robust answers to a set of clinical questions and to develop strategies, for use in such data sets, that would be useful and acceptable to clinicians. A unique data set of 401 Estrogen Receptor positive (ER+), tamoxifen-treated breast cancer patients with measurements for a large panel of biomarkers (72 in total) was available.
Taking a statistical approach, I applied a multi-faceted screening process to select a limited set of potentially informative variables and to detect the appropriate form of the association, followed by multiple imputation of missing data and bootstrapping. In comparison with the NPI, the final joint model assigned patients to more appropriate risk groups (14% of recurred and 4% of non-recurred cases). The actuarial 7-year RFS rate for patients in the lowest risk quartile was 95% (95% C.I.: 89%, 100%). To evaluate an alternative approach, biological knowledge was incorporated into the process of model development. Model building began with the use of biological expertise to divide the variables into substantive biomarker sets on the basis of their presumed role in the pathway to cancer progression. For each biomarker family, an informative and parsimonious index was generated by combining the family's variables, to be offered to the final model as an intermediate predictor. In comparison with the NPI, this model assigned patients to more appropriate risk groups (21% of recurred and 11% of non-recurred patients). It identified a low-risk group with a 7-year RFS rate of 98% (95% C.I.: 96%, 100%).
10

Ivarsson, Adam. « Expediting Gathering and Labeling of Data from Zebrafish Models of Tumor Progression and Metastasis Using Bespoke Software ». Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148691.

Full text
Abstract:
In this paper I describe a set of algorithms used to partly automate the labeling and preparation of images of zebrafish embryos used as models of tumor progression and metastasis. These algorithms show promise for saving time for researchers using zebrafish in this way.
11

Kapya, David. « Technical and scale efficiency in Zambia's agro-processing industry : a firm level data envelopment analysis of the 2011/2012 manufacturing census ». Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/23415.

Full text
Abstract:
The implementation of privatization and Structural Adjustment Programs in Zambia saw the contribution of manufacturing to GDP fall significantly, from 37.2 percent in 1992 to 8.2 percent in 2013. Efforts to revamp manufacturing have not lived up to expectations, and the industrial base has remained smaller than it was in the 1970s and 1980s. This has raised serious questions about suitable industrialization policies, not only for Zambia but for other African countries as well. This study examines the agro-processing industry with a view to establishing whether it can drive the development of Zambia's manufacturing. We start by exploring the growth opportunities and highlighting the key sectors of comparative advantage. Thereafter, we apply the Data Envelopment Analysis algorithm to construct measures of technical and scale efficiency for a sample of 115 firms using the 2011/2012 Economic Census data. Finally, we examine the effect of firm attributes on technical and scale efficiency using the Tobit regression model. The results reveal that there are sufficient growth opportunities in Zambia's agro-processing industry, but the industry is highly inefficient. The average technical efficiency was 42.5 percent while scale efficiency was 81.7 percent. The study also shows that firm efficiency is affected by firm size, the size of the firm's market share, labour costs, and the location of the firm.
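The DEA efficiency scores mentioned above have a particularly simple form in one special case. Under constant returns to scale with a single input and a single output, each firm's technical efficiency is just its output/input ratio relative to the best observed ratio; this sketch illustrates that idea only, since the study's multi-input setting requires solving a linear program per firm.

```python
def ccr_efficiency_1d(inputs, outputs):
    """Technical efficiency under constant returns to scale for the
    single-input, single-output case: each firm's productivity ratio
    divided by the best ratio in the sample (the efficient frontier).
    A score of 1.0 means the firm lies on the frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

For example, three firms producing the same output from inputs 2, 4 and 8 score 1.0, 0.5 and 0.25 respectively: the second firm would need to halve its input use to reach the frontier.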
12

Bône, Alexandre. « Learning adapted coordinate systems for the statistical analysis of anatomical shapes. Applications to Alzheimer's disease progression modeling ». Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS273.

Full text
Abstract:
This thesis aims to build coordinate systems for shapes, i.e. finite-dimensional metric spaces where shapes are represented by vectors. The goal of building such coordinate systems is to allow and facilitate the statistical analysis of shape data sets. The end-game motivation of our work is to predict and sub-type Alzheimer's disease, based in part on knowledge extracted from banks of brain medical images. Even if these data banks are longitudinal, their variability remains mostly due to the large and normal inter-individual variability of the brain. The variability due to the progression of pathological alterations is of much smaller amplitude. The central objective of this thesis is to develop a coordinate system adapted for the statistical analysis of longitudinal shape data sets, able to disentangle these two sources of variability. As shown in the literature, the parallel transport operator can be leveraged to achieve this disentanglement, for instance by defining the notion of exp-parallel curves on a manifold. Using this tool on shape spaces, however, comes with theoretical and computational challenges, tackled in the first part of this thesis. Finally, while shape spaces are commonly equipped with a manifold-like structure in the field of computational anatomy, the underlying classes of diffeomorphisms are most often built and parameterized without taking the data at hand into account. The last major objective of this thesis is to build deformation-based coordinate systems where the parameterization of deformations is adapted to the data set of interest.
13

dePillis-Lindheim, Lydia. « Disease Correlation Model : Application to Cataract Incidence in the Presence of Diabetes ». Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/scripps_theses/294.

Full text
Abstract:
Diabetes is a major risk factor for the development of cataract [3,14,20,22]. In this thesis, we create a model that allows us to understand the incidence of one disease in the context of another; in particular, cataract in the presence of diabetes. The World Health Organization's Vision 2020 blindness-prevention initiative administers surgeries to remove cataracts, the leading cause of blindness worldwide [24]. One of the geographic areas most impacted by cataract-related blindness is Sub-Saharan Africa. In order to plan the number of surgeries to administer, the World Health Organization uses data on cataract prevalence. However, an estimation of the incidence of cataract is more useful than prevalence data for the purpose of resource planning. In 2012, Dray and Williams developed a method for estimating incidence based on prevalence data [5]. Incidence estimates can be further refined by considering associated risk factors such as diabetes. We therefore extend the Dray and Williams model to include diabetes prevalence when calculating cataract incidence estimates. We explore two possible approaches to our model construction, one a detailed extension, and the other, a simplification of that extension. We provide a discussion comparing the two approaches.
14

Şentürk, Sertan. « Computational analysis of audio recordings and music scores for the description and discovery of Ottoman-Turkish Makam music ». Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/402102.

Full text
Abstract:
This thesis addresses several shortcomings in current state-of-the-art methodologies in music information retrieval (MIR). In particular, it proposes several computational approaches to automatically analyze and describe music scores and audio recordings of Ottoman-Turkish makam music (OTMM). The main contributions of the thesis are the music corpus that has been created to carry out the research and the audio-score alignment methodology developed for the analysis of the corpus. In addition, several novel computational analysis methodologies are presented in the context of common MIR tasks of relevance for OTMM. Some example tasks are predominant melody extraction, tonic identification, tempo estimation, makam recognition, tuning analysis, structural analysis and melodic progression analysis. These methodologies become part of a complete system called Dunya-makam for the exploration of large corpora of OTMM. The thesis starts by presenting the created CompMusic Ottoman-Turkish makam music corpus. The corpus includes 2200 music scores, more than 6500 audio recordings, and accompanying metadata. The data has been collected, annotated and curated with the help of music experts. Using criteria such as completeness, coverage and quality, we validate the corpus and show its research potential. In fact, our corpus is the largest and most representative resource of OTMM that can be used for computational research. Several test datasets have also been created from the corpus to develop and evaluate the specific methodologies proposed for the different computational tasks addressed in the thesis. The part focusing on the analysis of music scores is centered on phrase- and section-level structural analysis. Phrase boundaries are automatically identified using an existing state-of-the-art segmentation methodology. Section boundaries are extracted using heuristics specific to the formatting of the music scores.
Subsequently, a novel method based on graph analysis is used to establish similarities across these structural elements in terms of melody and lyrics, and to label the relations semiotically. The audio analysis section of the thesis reviews the state of the art for analysing the melodic aspects of performances of OTMM. It proposes adaptations of existing predominant melody extraction methods tailored to OTMM. It also presents improvements over pitch-distribution-based tonic identification and makam recognition methodologies. The audio-score alignment methodology is the core of the thesis. It addresses the culture-specific challenges posed by the musical characteristics, music theory related representations and oral praxis of OTMM. Based on several techniques such as subsequence dynamic time warping, the Hough transform and variable-length Markov models, the audio-score alignment methodology is designed to handle the structural differences between music scores and audio recordings. The method is robust to the presence of non-notated melodic expressions, tempo deviations within the music performances, and differences in tonic and tuning. The methodology utilizes the outputs of the score and audio analysis, and links the audio and the symbolic data. In addition, the alignment methodology is used to obtain a score-informed description of audio recordings. The score-informed audio analysis not only simplifies the audio feature extraction steps that would otherwise require sophisticated audio processing approaches, but also substantially improves the performance compared with results obtained from state-of-the-art methods relying solely on audio data. The analysis methodologies presented in the thesis are applied to the CompMusic Ottoman-Turkish makam music corpus and integrated into a web application aimed at culture-aware music discovery. Some of the methodologies have already been applied to other music traditions such as Hindustani, Carnatic and Greek music.
Following open research best practices, all the created data, software tools and analysis results are openly available. The methodologies, the tools and the corpus itself provide vast opportunities for future research in many fields such as music information retrieval, computational musicology and music education.
This thesis addresses several shortcomings of the current state of the art in music information retrieval (MIR) methodologies. In particular, it proposes several computational methods for the automatic analysis and description of music scores and audio recordings of Ottoman-Turkish makam music (OTMM). The main contributions of the thesis are the music corpus created for this research and the audio-score alignment methodology developed for the analysis of the corpus. In addition, several novel computational analysis methodologies are presented in the context of common MIR tasks relevant to OTMM, such as predominant melody extraction, tonic identification, tempo estimation, makam recognition, tuning analysis, structural analysis, and melodic progression analysis. These methodologies form the components of a complete system, called Dunya-makam, for the exploration of large OTMM corpora. The thesis begins by presenting the CompMusic Ottoman-Turkish makam music corpus. The corpus includes 2200 music scores, more than 6500 audio recordings, and the corresponding metadata. The data have been collected, annotated, and curated with the help of experts. Using criteria such as completeness, coverage, and quality, we validate the corpus and show its research potential. Indeed, our corpus is the largest and most representative resource available for computational research on OTMM. Several experimental datasets have also been derived from the corpus in order to develop and evaluate the specific methodologies proposed for the different computational tasks addressed in the thesis. The part devoted to score analysis focuses on structural analysis at the section and phrase level. 
Phrase boundaries are identified automatically using one of the state-of-the-art segmentation methods. Section boundaries are extracted using a heuristic specific to the score format. A novel method based on graph analysis is then used to establish similarities between these structural elements in terms of melody and lyrics, and to label relations semiotically. The audio analysis part of the thesis reviews the state of the art in the analysis of melodic aspects of OTMM recordings. Modifications of existing predominant melody extraction methods are proposed to tailor them to OTMM. Improvements are also presented both for pitch-distribution-based tonic identification and for makam recognition. The audio-score alignment methodology constitutes the core of the thesis. It addresses the culture-specific challenges posed by the musical characteristics, the music-theory-related representations, and the oral practice of OTMM. Based on several techniques such as subsequence dynamic time warping, the Hough transform, and variable-length Markov models, the audio-score alignment methodology is designed to handle the structural differences between scores and audio recordings. The method is robust to the presence of unannotated melodic expressions, tempo deviations in the recordings, and differences in tonic and tuning. The methodology makes use of the score and audio analysis results to link the audio and the symbolic data. In addition, the alignment methodology is used to obtain a score-informed description of the audio recordings. 
Score-informed audio analysis not only simplifies audio feature extraction steps that would otherwise require sophisticated audio processing approaches, but also substantially improves performance compared with results obtained by state-of-the-art methods relying only on audio data. The analysis methodologies presented in the thesis are applied to the CompMusic Ottoman-Turkish makam music corpus and integrated into a web application aimed at culture-specific music discovery. Some of the methodologies have already been applied to other music traditions, such as Hindustani, Carnatic, and Greek music. Following open research best practices, all the created data, software tools, and analysis results are publicly available. The methodologies, the tools, and the corpus itself provide ample opportunities for future research in many fields such as music information retrieval, computational musicology, and music education.
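The alignment methodology above builds on dynamic time warping (DTW). As a minimal illustration of the core idea only (the thesis uses a subsequence variant with further machinery), a plain DTW distance between two 1-D contours can be sketched as follows; the function name and toy data are illustrative assumptions:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    filled in with the standard cumulative-cost recursion."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A pitch contour and a time-stretched rendition of it align at zero cost:
ref = np.array([0., 1., 2., 2., 1., 0.])
stretched = np.array([0., 0., 1., 1., 2., 2., 2., 1., 1., 0.])
print(dtw_distance(ref, stretched))  # → 0.0: warping absorbs the stretch
```

The zero distance shows why DTW is a natural fit for performances whose tempo deviates from the notated score.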
15

Tavares, Lucas Alves. « O envolvimento da proteína adaptadora 1 (AP-1) no mecanismo de regulação negativa do receptor CD4 por Nef de HIV-1 ». Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/17/17136/tde-06012017-113215/.

Full text
Abstract:
The Human Immunodeficiency Virus (HIV) is the etiologic agent of Acquired Immunodeficiency Syndrome (AIDS). AIDS is a disease of global distribution, and it is estimated that at least 36.9 million people are currently infected with the virus. During its replication cycle, HIV promotes several changes in the physiology of the host cell to promote its survival and enhance replication. The fast progression of HIV-1 infection in humans and animal models is closely linked to the function of the accessory protein Nef. Among the several actions of Nef, one of the most important is the down-regulation of proteins of the immune response, such as the CD4 receptor. This action is known to result in CD4 degradation in lysosomes, but the molecular mechanisms involved are still incompletely understood. Nef forms a tripartite complex with the cytosolic tail of CD4 and adaptor protein 2 (AP-2) in nascent clathrin-coated vesicles, inducing CD4 internalization and lysosomal degradation. Previous research demonstrated that CD4 targeting to lysosomes by Nef involves entry of the receptor into the multivesicular body (MVB) pathway by an atypical mechanism: although it does not require cargo ubiquitination, it depends on proteins of the ESCRT (Endosomal Sorting Complexes Required for Transport) machinery and on the action of Alix, an ESCRT-associated accessory protein. Nef has been reported to interact with subunits of the AP-1, AP-2, and AP-3 complexes, but does not appear to interact with AP-4 or AP-5 subunits. However, the role of the interaction of Nef with AP-1 or AP-3 in CD4 down-regulation is poorly understood. Furthermore, AP-1, AP-2, and AP-3 are potentially heterogeneous owing to the existence of multiple subunit isoforms encoded by different genes. However, few studies have examined whether the different combinations of AP isoforms are actually formed and whether they have distinct functional properties. 
This study aimed to identify and characterize cellular factors involved in the CD4 down-modulation induced by HIV-1 Nef. More specifically, it sought to characterize the involvement of the AP-1 complex in the down-regulation of CD4 by HIV-1 Nef through a functional study of the two isoforms of γ-adaptin, an AP-1 subunit. Using pull-down assays, we showed that Nef is able to interact with γ2. In addition, our immunoblot data indicated that γ2-adaptin, not γ1-adaptin, is required for Nef-mediated targeting of CD4 to lysosomes, and that the participation of γ2 in this process is conserved for Nef from different viral strains. Furthermore, flow cytometry assays showed that γ2 depletion, but not γ1 depletion, compromises the reduction of surface CD4 levels induced by Nef. Immunofluorescence microscopy analysis also revealed that γ2 depletion impairs the redistribution of CD4 by Nef to the juxtanuclear region, resulting in CD4 accumulation in early endosomes. Knockdown of μ1A, another subunit of AP-1, decreased the cellular levels of γ1 and γ2 and compromised efficient CD4 degradation by Nef. Moreover, upon artificially stabilizing ESCRT-0 in early endosomes via overexpression of HRS, internalized CD4 accumulated in enlarged HRS-GFP-positive endosomes, where it co-localized with γ2. Together, the results indicate that γ2-adaptin is essential for CD4 targeting by Nef to the ESCRT/MVB pathway, making it an important protein in the endo-lysosomal system. Furthermore, the results indicate that the γ-adaptin isoforms not only have different functions but also appear to compose AP-1 complexes with distinct cellular functions, since only the AP-1 variant comprising γ2, but not γ1, acts in the CD4 down-regulation induced by Nef. These studies contribute to a better understanding of the molecular mechanisms involved in Nef activities, which may also help improve understanding of HIV pathogenesis and the related syndrome. 
In addition, this work contributes to the understanding of fundamental processes regulating the intracellular trafficking of transmembrane proteins.
16

Lai, Chih-Hung, et 賴致宏. « The unsupervised band selection methods for progressive data analysis of hyperspectral imagery ». Thesis, 2016. http://ndltd.ncl.edu.tw/handle/78332053556447541656.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Mechanical and Electro-Mechanical Engineering
Academic year 104 (ROC calendar)
Band selection (BS) is an important topic that has received wide attention in hyperspectral imaging (HSI) and the remote sensing community for many years. In this thesis, two types of unsupervised sequential band selection (SQBS) methods for progressive data analysis of hyperspectral imagery are presented. In the first method, we propose two kinds of algorithms based on two individual perspectives: minimum band similarity and maximum band information. In the other method, we adopt a sparse self-representation (SSR) model, which assumes that all the bands can be represented by a set of representative bands. A well-known greedy algorithm, orthogonal matching pursuit (OMP), is used to solve the optimization problem formulated by the SSR model. Both methods have the following properties: 1. The BS result forms a band sequence, so progressive data analysis can be carried out. 2. The selected bands are highly uncorrelated; that is, the information redundancy of the selected bands is minimized. Experiments on two real hyperspectral datasets show that the proposed SQBS methods achieve significantly better performance for progressive classification and spectral unmixing than conventional sequential band selection methods when the number of selected bands is low.
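The SSR idea can be illustrated with a greedy, OMP-style selection of representative bands: at each step, pick the band that best explains the residual of all bands, then deflate by projecting onto the span of the bands chosen so far. This is a simplified sketch, not the thesis's algorithm; `greedy_band_selection` and its selection rule are illustrative assumptions:

```python
import numpy as np

def greedy_band_selection(X, k):
    """Select k representative columns (bands) of the pixels-by-bands matrix X.
    Greedy rule: choose the band most correlated with the current residual of
    all bands, then recompute the residual after least-squares projection."""
    n_pixels, n_bands = X.shape
    R = X.copy()                        # residual: what selected bands cannot yet represent
    selected = []
    for _ in range(k):
        scores = np.linalg.norm(X.T @ R, axis=1)  # correlation of each band with residuals
        scores[selected] = -np.inf                # never reselect a band
        j = int(np.argmax(scores))
        selected.append(j)
        B = X[:, selected]                        # project all bands onto chosen bands
        coef, *_ = np.linalg.lstsq(B, X, rcond=None)
        R = X - B @ coef
    return selected

# toy example: bands 0 and 1 are identical, band 2 is independent
X = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
print(greedy_band_selection(X, 2))  # → [0, 2]: one band per distinct group
```

The redundancy-minimizing behavior shows up directly: the duplicate band 1 is never selected.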
17

Sun, Shih-Han, et 孫士函. « Bayesian Analysis of Progressive Type I Interval Censored Data Under General Exponential Distribution ». Thesis, 2014. http://ndltd.ncl.edu.tw/handle/3zp375.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
Academic year 102 (ROC calendar)
WinBUGS is a statistical software package commonly used by Bayesian statisticians to run the Markov chain Monte Carlo method via the Metropolis-Hastings algorithm with simple statistical programming code. However, the software is not friendly to operate and its outputs are limited. A Bayesian analysis of progressive type I interval censored data from the generalized exponential distribution was carried out by Lin and Lio (2012). This research instead takes advantage of the data manipulation capabilities of the popular statistical software R and of the simple but powerful Bayesian tool WinBUGS to perform statistical estimation for progressive type I interval censored data from the generalized exponential distribution. Specifically, we propose to generate progressive type I interval observations in R and analyze the simulated data with the R2WinBUGS package, which calls WinBUGS in batch mode. The results are then passed back to R for further analysis. Finally, simulation studies are repeated 500 times to calculate the standard error of the proposed algorithm. The results appear satisfactory, as expected. Overall, this research applies the R2WinBUGS package to analyze the progressive type I interval censoring scheme.
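The data-generation step described above (progressive type I interval observations) can be sketched in Python rather than R. At each inspection time, the number of failures among units still on test is binomial under the conditional CDF, and then a binomial fraction of survivors is withdrawn. The function name and the generalized-exponential parameterization F(x) = (1 − e^(−λx))^α are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def gen_exp_cdf(x, alpha, lam):
    """CDF of the generalized (exponentiated) exponential distribution."""
    return (1.0 - np.exp(-lam * x)) ** alpha

def progressive_type1_interval(n, times, p, alpha, lam, rng):
    """Simulate progressive type I interval censored counts.
    Returns (failure counts X_i, removal counts R_i) per inspection interval."""
    at_risk = n
    F_prev = 0.0
    X, R = [], []
    for t in times:
        F_t = gen_exp_cdf(t, alpha, lam)
        q = (F_t - F_prev) / (1.0 - F_prev)  # conditional failure prob in interval
        x = rng.binomial(at_risk, q)         # failures observed at inspection t
        at_risk -= x
        r = rng.binomial(at_risk, p)         # progressive binomial withdrawal
        at_risk -= r
        X.append(x)
        R.append(r)
        F_prev = F_t
    return X, R

X, R = progressive_type1_interval(100, [0.5, 1.0, 1.5, 2.0], 0.1, 2.0, 1.0, rng)
```

Such simulated counts would then be handed to the Bayesian sampler (WinBUGS in the thesis) for posterior estimation of α and λ.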
18

WU, JIUN-HAU, et 吳俊皓. « Bayesian Analysis of Progressive Type II Scheme Data from Weibull Distribution Using R NIMBLE package ». Thesis, 2019. http://ndltd.ncl.edu.tw/handle/eu3654.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
Academic year 107 (ROC calendar)
Bayesian statistics has been widely used in industry, finance, medicine, and other fields. In the past it was limited by computational complexity, but with advances in technology, the emergence of statistical software has spared statisticians much cumbersome computation in simulation analysis. The Bayesian method uses past experience combined with the collected information for analysis and simulation. In this thesis, to carry out a Bayesian analysis of progressive Type II censored data from the Weibull distribution, we use the NIMBLE package for R to analyze the model parameters. NIMBLE can run the Markov chain Monte Carlo (MCMC) method efficiently and quickly to estimate the model parameters; after the iterative updates, we observe whether the parameter estimates approximate the true parameter values.
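NIMBLE compiles and runs the MCMC automatically; the underlying mechanism can be illustrated with a hand-written random-walk Metropolis sampler for complete (uncensored) Weibull data. This is a plain-Python sketch under a flat prior on the log-parameters, not the thesis's progressive Type II model:

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_loglik(log_params, data):
    """Weibull(shape k, scale lam) log-likelihood, parameterized on the log
    scale so the random walk cannot leave the positive orthant."""
    k, lam = np.exp(log_params)
    z = data / lam
    return data.size * (np.log(k) - np.log(lam)) + np.sum((k - 1.0) * np.log(z) - z ** k)

def metropolis(data, n_iter=5000, step=0.05):
    """Random-walk Metropolis on (log k, log lam) with a flat prior."""
    theta = np.zeros(2)                          # start at k = lam = 1
    ll = weibull_loglik(theta, data)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=2)
        ll_prop = weibull_loglik(prop, data)
        if np.log(rng.random()) < ll_prop - ll:  # Metropolis acceptance rule
            theta, ll = prop, ll_prop
        chain[i] = np.exp(theta)                 # store on the original scale
    return chain

data = rng.weibull(2.0, size=500) * 3.0          # true shape 2, scale 3
chain = metropolis(data)
k_hat, lam_hat = chain[2500:].mean(axis=0)       # posterior means after burn-in
```

With 500 observations, the post-burn-in means land close to the true shape 2 and scale 3, which is exactly the "estimates approximate the true values" check the abstract describes.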
19

Shu, Wei-Lin, et 沈暐玲. « Statistical analysis of Rayleigh and Gompertz distributed data under Type II progressive censoring with binomial and random removals ». Thesis, 2003. http://ndltd.ncl.edu.tw/handle/20770525051513504945.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Statistics
Academic year 91 (ROC calendar)
Studies of the lifetimes of organisms and products are frequent research topics in the natural sciences and in industry, and past research has developed several censoring methods. Before an experiment ends, some subjects may drop out for various reasons; furthermore, experiments may be terminated early because continuing them would be hazardous or would no longer meet the requirements of the study. Progressive removal was developed to address these situations. This thesis concentrates on the statistical analysis of Rayleigh and Gompertz distributed lifetime data under Type II progressive censoring with binomial and random removals. First, the thesis discusses the maximum likelihood estimates and characteristics of the parameters of the distributions, and describes binomial and random removals in turn. We then derive the expected values for the Rayleigh and Gompertz distributions under binomial and random removals, and compute the expected termination time for various parameter values. Finally, we further analyze the expected time of the Rayleigh and Gompertz distributions under Type II progressive censoring with binomial and random removals.
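The censoring scheme itself is easy to simulate: after each observed failure, a binomial number of surviving units is withdrawn from the test. A minimal sketch for Rayleigh lifetimes follows; the cap on removals that preserves m observable failures is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

def rayleigh_sample(size, sigma, rng):
    # inverse-CDF sampling: F(x) = 1 - exp(-x^2 / (2 sigma^2))
    return sigma * np.sqrt(-2.0 * np.log(rng.random(size)))

def progressive_type2_binomial(n, m, p, sigma, rng):
    """Simulate m progressively Type II censored Rayleigh failure times.
    After the i-th failure, R_i ~ Binomial(removable survivors, p) units are
    withdrawn, keeping enough units on test to observe all m failures."""
    alive = list(rayleigh_sample(n, sigma, rng))
    failures, removals = [], []
    for i in range(1, m + 1):
        alive.sort()
        failures.append(alive.pop(0))           # next failure = smallest remaining lifetime
        max_removable = len(alive) - (m - i)    # reserve units for future failures
        r = rng.binomial(max_removable, p) if max_removable > 0 else 0
        removals.append(r)
        if r:                                   # withdraw r random survivors
            drop = rng.choice(len(alive), size=r, replace=False)
            for idx in sorted(drop, reverse=True):
                alive.pop(idx)
    return np.array(failures), np.array(removals)

t, r = progressive_type2_binomial(n=30, m=10, p=0.3, sigma=2.0, rng=rng)
```

The observed failure times come out in increasing order, and the failures plus removals never exceed the n units placed on test.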
20

Chan, Huang Yu, et 詹煌宇. « Statistical analysis of Extreme-value and Pareto distributed data under Type II progressive censoring with binomial and random removals ». Thesis, 2002. http://ndltd.ncl.edu.tw/handle/20869733130148881386.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Statistics
Academic year 90 (ROC calendar)
Studies of the lifetimes of organisms and products are frequent research topics in the natural sciences and in industry, and past research has developed several censoring methods. Before an experiment ends, some subjects may drop out for various reasons; furthermore, experiments may be terminated early because continuing them would be hazardous or would no longer meet the requirements of the study. Progressive removal was developed to address these situations. This thesis concentrates on the statistical analysis of Extreme-value and Pareto distributed lifetime data under Type II progressive censoring with binomial and random removals. First, the thesis discusses the maximum likelihood estimates and characteristics of the parameters of the distributions, and describes binomial and random removals in turn. We then derive the expected values for the Extreme-value and Pareto distributions under binomial and random removals, and compute the expected termination time for various parameter values. Finally, we further analyze the expected time of the Extreme-value and Pareto distributions under Type II progressive censoring with binomial and random removals.
21

Bhalchandra, Noopur Anil. « Shape and progression modeling and analysis in Parkinson's disease through multi-modal data analysis ». Thesis, 2018. http://localhost:8080/xmlui/handle/12345678/7698.

Full text
22

Lin, Jhang-cyuan, et 林章權. « Bayesian Analysis and Optimal Design in Accelerated Life Tests with Progressively Censored Data ». Thesis, 2008. http://ndltd.ncl.edu.tw/handle/jh9b4g.

Full text
Abstract:
碩士
國立中央大學
統計研究所
96
Accelerated life testing of products is used to obtain information quickly about their lifetime distribution. In this thesis, we discuss k-stage step-stress accelerated life tests under the progressive Type-I and Type-II censoring schemes, respectively. An exponential lifetime distribution whose mean lifetime is a log-linear function of the stress variable is considered. The classical maximum likelihood method as well as a fully Bayesian method based on the Markov chain Monte Carlo (MCMC) technique are developed for inference on all the related parameters. Empirical studies show that the Bayesian methodology draws quite accurate inferences even with a noninformative prior. Under the equi-spaced stress experiment, the optimal stress increment and equal time duration are investigated for Type-I censored data, whereas the optimal sampling plan is discussed under the Type-II censoring scheme with a criterion that simultaneously minimizes the expected experimental time and the variance of the maximum likelihood estimator. Numerical examples are presented for illustration.
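For the exponential step-stress model with a log-linear mean life, the maximum likelihood estimates have a closed form. A sketch for a simple two-level step-stress test without censoring (a deliberate simplification of the k-stage censored designs studied in the thesis; all names and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_step_stress(n, tau, s1, s2, a, b, rng):
    """Exponential step-stress: mean life theta(s) = exp(a + b*s); the stress
    is raised from s1 to s2 at time tau. By memorylessness, survivors of tau
    receive a fresh exponential residual life at the new stress level."""
    th1, th2 = np.exp(a + b * s1), np.exp(a + b * s2)
    t1 = rng.exponential(th1, size=n)
    return np.where(t1 <= tau, t1, tau + rng.exponential(th2, size=n))

def fit_log_linear(t, tau, s1, s2):
    """Closed-form MLEs of the stage means, then solve log theta_j = a + b*s_j."""
    low = t <= tau
    n1, n2 = low.sum(), (~low).sum()
    th1 = (t[low].sum() + n2 * tau) / n1   # total time on test at stress s1
    th2 = (t[~low] - tau).sum() / n2       # residual time on test at stress s2
    b = (np.log(th2) - np.log(th1)) / (s2 - s1)
    return np.log(th1) - b * s1, b

t = simulate_step_stress(2000, tau=2.0, s1=1.0, s2=2.0, a=2.0, b=-1.0, rng=rng)
a_hat, b_hat = fit_log_linear(t, tau=2.0, s1=1.0, s2=2.0)
```

With 2000 units, the recovered coefficients land near the true values a = 2 and b = −1, illustrating how the accelerated test identifies the stress-to-lifetime relationship.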
23

« Towards Robust Machine Learning Models for Data Scarcity ». Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.57014.

Full text
Abstract:
Recently, a well-designed and well-trained neural network can yield state-of-the-art results across many domains, including data mining, computer vision, and medical image analysis. But progress has been limited for tasks where labels are difficult or impossible to obtain. This reliance on exhaustive labeling is a critical limitation to the rapid deployment of neural networks. Besides, current research scales poorly to a large number of unseen concepts and is passively spoon-fed with data and supervision. To overcome the above data scarcity and generalization issues, in my dissertation I first propose two unsupervised conventional machine learning algorithms, hyperbolic stochastic coding and multi-resemble multi-target low-rank coding, to solve the incomplete data and missing label problem. I further introduce a deep multi-domain adaptation network to leverage the power of deep learning by transferring the rich knowledge from a large labeled source dataset. I also invent a novel time-sequence dynamically hierarchical network that adaptively simplifies the network to cope with scarce data. To learn a large number of unseen concepts, lifelong machine learning enjoys many advantages, including abstracting knowledge from prior learning and using that experience to help future learning, regardless of how much data is currently available. Incorporating this capability and making it versatile, I propose deep multi-task weight consolidation to accumulate knowledge continuously and significantly reduce data requirements in a variety of domains. Inspired by recent breakthroughs in automatically learning suitable neural network architectures (AutoML), I develop a nonexpansive AutoML framework to train an online model without an abundance of labeled data. This work automatically expands the network to increase model capability when necessary, then compresses the model to maintain efficiency. 
In my current ongoing work, I propose an alternative method of supervised learning that does not require direct labels. It uses various forms of supervision from an image/object as target values for supervising the target tasks without labels, and this turns out to be surprisingly effective. The proposed method requires only few-shot labeled data to train, learns the information it needs in a self-supervised manner, and generalizes to datasets not seen during training.
Dissertation/Thesis
Doctoral Dissertation Computer Science 2020
24

Freitas, Pedro. « The reasons behind the progression in PISA scores : an education production function approach using semi-parametric techniques ». Master's thesis, 2015. http://hdl.handle.net/10362/17452.

Full text
Abstract:
Research Masters
In December 2013, and for the fifth time since 2000, the OECD published the results of the latest PISA survey, providing a view of how students' performance has progressed over the last 12 years. Using PISA data we follow an education production function approach, which states that variables related to students, their family, and the school explain the output, measured as individual student achievement. Exploring the concept of efficiency, we measure each student's ability to transform the given inputs into higher academic outcomes. This analysis was performed through the estimation of an efficient frontier derived by non-parametric techniques, namely Data Envelopment Analysis (DEA). Using this methodology we establish two lines of analysis. The first disentangles the reasons behind the evolution in PISA scores across the years, concluding that the variation in inputs is at the core of the explanation for the evolution in PISA results. The second evaluates the sources of student efficiency. On this topic we particularly explore the role of school inputs, concluding that students with a more favourable socio-economic background are more indifferent to variables such as class size and school size.
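A DEA efficiency score of the kind described can be computed with one small linear program per decision-making unit (here, per student). The sketch below uses the output-oriented, constant-returns-to-scale (CCR) formulation via `scipy.optimize.linprog`; the thesis's exact DEA variant may differ, and the toy data are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y):
    """Output-oriented CCR DEA. X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs).
    For each unit o, maximize phi such that some nonnegative combination of
    peers uses no more input than o yet produces at least phi * o's output.
    Efficiency = 1/phi, so 1.0 means the unit lies on the frontier."""
    n = X.shape[0]
    eff = []
    for o in range(n):
        c = np.concatenate(([-1.0], np.zeros(n)))          # maximize phi
        A_in = np.hstack((np.zeros((X.shape[1], 1)), X.T))  # sum_j lam_j x_j <= x_o
        A_out = np.hstack((Y[[o]].T, -Y.T))                 # phi*y_o <= sum_j lam_j y_j
        res = linprog(c,
                      A_ub=np.vstack((A_in, A_out)),
                      b_ub=np.concatenate((X[o], np.zeros(Y.shape[1]))),
                      bounds=[(0, None)] * (n + 1))
        eff.append(1.0 / res.x[0])
    return np.array(eff)

# one input (e.g. teaching hours), one output (e.g. test score)
X = np.array([[1.], [2.], [4.]])
Y = np.array([[1.], [4.], [4.]])
print(dea_output_efficiency(X, Y))  # → [0.5, 1.0, 0.5]: only unit 1 is efficient
```

Unit 1 converts input to output at the best observed rate, so it defines the frontier; the other two units could, by that benchmark, double their output.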
25

Devkota, Jyoti U. « Modeling and projecting Nepal's Mortality and Fertility ». Doctoral thesis, 2000. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2000092625.

Full text
Abstract:
The objective of this study was to mathematically analyse, model, and forecast the vital rates (mortality and fertility) of Nepal. To attain this goal, the data have been converted into tables and analysed intensively using several software packages such as Microsoft Excel, SPSS, and Mathematica. The margin of error of the data has been analysed; in Chapter 4, the error and uncertainty in the data are analysed using Bayesian analysis. The reliability of the data of Nepal has been compared with the reliability of the data of Germany. The mortality and fertility conditions of Nepal have been compared from two angles: data on India (particularly north India) provide a comparison on socio-economic grounds, whereas data on Germany (accurate and abundant) provide a comparison in terms of data availability and accuracy. Thus, in addition to analysing and modeling the data, the regional behaviour has been studied. The limited and defective data of Nepal posed a challenge at every stage and phase; because of this, very long-term forecasting of mortality could not be made. The model has nevertheless provided a great deal of information on mortality for the years for which data were lacking, and in the future, with new data at hand and with the new models developed here, long-term projections should become possible. In the less developed world, rural and urban areas have a big impact on the mortality and fertility of a country, so the rural and urban effects on mortality and fertility have been studied individually. The analysis of the mortality scene of Nepal shows that mortality is decreasing. The decrease is slow, but it reflects the advancement in medical facilities and health awareness. Fertility is also decreasing: there is a decrease in the number of children per woman and per family, and this decrease is more pronounced in the urban areas compared with the rural areas. 
This also reflects that the family planning programmes launched are showing results, particularly in urban areas.
