Doctoral dissertations on the topic "Machine learning tools"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Browse the top 50 doctoral dissertations on the topic "Machine learning tools".
An "Add to bibliography" button is available next to each work. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, when these details are available in the metadata.
Browse doctoral dissertations from many different fields of study and compile an accurate bibliography.
Kanwar, John. "Smart cropping tools with help of machine learning". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74827.
Machine learning has been around for a long time, and its applications span many different areas, everything from self-driving cars to data mining. When a person takes a picture with a mobile phone, the image easily comes out slightly tilted. Spontaneous snapshots also often catch something at the edge of the frame that should not be there. This thesis combines machine learning with photo-editing tools. It explores how machine learning can be used to automatically crop images in an aesthetically pleasing way and how it can be used to build a portrait-cropping tool. It also covers how a straightening function can be implemented with the help of machine learning. Finally, it compares these tools with the automatic cropping tools of other programs.
Nordin, Alexander Friedrich. "End to end machine learning workflow using automation tools". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119776.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 79-80).
We have developed an open source library named Trane and integrated it with two open source libraries to build an end-to-end machine learning workflow that can facilitate rapid development of machine learning models. The three components of this workflow are Trane, Featuretools and ATM. Trane enumerates tens of prediction problems relevant to any dataset using the meta information about the data. Furthermore, Trane generates training examples required for training machine learning models. Featuretools is an open-source software for automatically generating features from a dataset. Auto Tune Models (ATM), an open source library, performs a high throughput search over modeling options to find the best modeling technique for a problem. We show the capability of these three tools and highlight the open-source development of Trane.
by Alexander Friedrich Nordin.
M. Eng.
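The Nordin thesis above describes a three-stage automated workflow: enumerate prediction problems, generate features, and then run a high-throughput search over modeling options to find the best one. As a rough illustration of that final search stage only, here is a self-contained sketch in plain Python. The candidate "models", data, and function names are invented for illustration; this does not reproduce the Trane, Featuretools, or ATM APIs.

```python
# Toy analog of an ATM-style model search: fit every candidate modeling
# option, score each on held-out data, and keep the lowest-error one.

def mean_predictor(train):
    """Candidate 1: always predict the training mean."""
    m = sum(train) / len(train)
    return lambda x: m

def last_value_predictor(train):
    """Candidate 2: always predict the last training value."""
    last = train[-1]
    return lambda x: last

def mse(model, xs, ys):
    """Mean squared error of a fitted model on held-out pairs."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(ys)

def search_best_model(train_y, test_x, test_y, candidates):
    """High-throughput search: fit all candidates, return the best (err, name, model)."""
    scored = []
    for name, builder in candidates.items():
        model = builder(train_y)
        scored.append((mse(model, test_x, test_y), name, model))
    scored.sort(key=lambda t: t[0])
    return scored[0]

candidates = {"mean": mean_predictor, "last": last_value_predictor}
err, name, best = search_best_model([1, 2, 3, 4], [0, 0], [4, 4], candidates)
print(name, err)  # the "last" predictor matches the held-out values exactly
```

Real systems like ATM search over far richer spaces (model families plus hyperparameters), but the select-by-held-out-score loop is the same shape.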
Jalali, Mana. "Voltage Regulation of Smart Grids using Machine Learning Tools". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/95962.
With the advent of renewable energies into power systems, innovative and automatic monitoring and control techniques are required. More specifically, voltage regulation for distribution grids with solar generation can be a challenging task. Moreover, due to the frequency and intensity of voltage changes, traditional utility-owned voltage regulation equipment is not effective in the long term. On the other hand, smart inverters installed with solar panels can be used for regulating the voltage. Smart inverters can be programmed to inject or absorb reactive power, which directly influences the voltage. The utility can monitor, control, and synchronize the inverters across the grid to maintain the voltage within the desired limits. Machine learning and optimization techniques can be applied to automate voltage regulation in smart grids using the smart inverters installed with solar panels. In this work, voltage regulation is addressed by reactive power control.
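The reactive-power mechanism the Jalali abstract describes can be illustrated with a toy volt-VAR droop rule: the inverter absorbs reactive power when the local voltage is above its reference and injects reactive power when it is below, up to its capability limit. All numbers here (reference voltage, gain, limit) are invented for illustration and are not taken from the thesis.

```python
# Minimal volt-VAR droop sketch: positive q = reactive power injection
# (raises local voltage), negative q = absorption (lowers it).

def droop_q(v_pu, v_ref=1.0, gain=2.0, q_max=0.44):
    """Piecewise-linear droop: q proportional to voltage deviation,
    saturated at the inverter's reactive capability q_max (per unit)."""
    q = gain * (v_ref - v_pu)          # below reference -> inject (q > 0)
    return max(-q_max, min(q_max, q))  # respect the capability limit

print(droop_q(1.05))  # over-voltage: negative q, inverter absorbs
print(droop_q(0.97))  # under-voltage: positive q, inverter injects
```

The thesis's contribution is learning/optimizing such setpoints grid-wide rather than fixing a local rule, but this is the actuator the control operates through.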
Viswanathan, Srinidhi. "ModelDB : tools for machine learning model management and prediction storage". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113540.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-100).
Building a machine learning model is often an iterative process. Data scientists train hundreds of models before finding a model that meets acceptable criteria. But tracking these models and remembering the insights obtained from them is an arduous task. In this thesis, we present two main systems for facilitating better tracking, analysis, and querying of scikit-learn machine learning models. First, we introduce our scikit-learn client for ModelDB, a novel end-to-end system for managing machine learning models. The client allows data scientists to easily track diverse scikit-learn workflows with minimal changes to their code. Then, we describe our extension to ModelDB, PredictionStore. While the ModelDB client enables users to track the different models they have run, PredictionStore creates a prediction matrix to tackle the remaining piece of the puzzle: facilitating better exploration and analysis of model performance. We implement a query API to assist in analyzing predictions and answering nuanced questions about models. We also implement a variety of algorithms that use the prediction matrix to recommend particular models to ensemble. We evaluate ModelDB and PredictionStore on different datasets and determine that ModelDB successfully tracks scikit-learn models and that most complex model queries can be executed in a matter of seconds using our query API. In addition, the workflows demonstrate significant improvement in accuracy using the ensemble algorithms. The overall goal of this research is to provide a flexible framework for training scikit-learn models, storing their predictions/models, and efficiently exploring and analyzing the results.
by Srinidhi Viswanathan.
M. Eng.
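The prediction matrix at the heart of the Viswanathan abstract, rows of per-model predictions over a shared test set that can be queried for the best model or mined for ensembles, can be sketched with a minimal, hypothetical analog in plain Python. The class and method names below are invented; the real ModelDB/PredictionStore API differs.

```python
# Toy prediction-matrix store: rows are models, columns are test examples.
# Supports the two operations the abstract highlights: querying for the
# best model, and ensembling chosen models by majority vote.

from collections import Counter

class PredictionMatrix:
    def __init__(self, y_true):
        self.y_true = y_true
        self.rows = {}  # model_id -> list of stored predictions

    def store(self, model_id, preds):
        self.rows[model_id] = preds

    def accuracy(self, model_id):
        preds = self.rows[model_id]
        return sum(p == t for p, t in zip(preds, self.y_true)) / len(self.y_true)

    def best_model(self):
        """Query: which stored model has the highest accuracy?"""
        return max(self.rows, key=self.accuracy)

    def ensemble(self, model_ids):
        """Majority vote down each column of the chosen rows."""
        cols = zip(*(self.rows[m] for m in model_ids))
        return [Counter(col).most_common(1)[0][0] for col in cols]

pm = PredictionMatrix(y_true=[1, 0, 1, 1])
pm.store("m1", [1, 0, 1, 1])
pm.store("m2", [1, 1, 0, 1])
pm.store("m3", [0, 0, 1, 1])
print(pm.best_model())                  # "m1": it matches the truth exactly
print(pm.ensemble(["m1", "m2", "m3"]))  # column-wise majority vote
```

Storing predictions rather than only scores is what makes both queries cheap: no model has to be re-run to answer them.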
Borodavkina, Lyudmila 1977. "Investigation of machine learning tools for document clustering and classification". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/8932.
Includes bibliographical references (leaves 57-59).
Data clustering is the problem of discovering the underlying data structure without any prior information about the data. The focus of this thesis is to evaluate a few modern clustering algorithms in order to determine their performance in adverse conditions. Synthetic Data Generation software is presented as a useful tool both for generating test data and for investigating the results of data clustering. Several theoretical models and their behavior are discussed, and, as a result of analyzing a large number of quantitative tests, we come up with a set of heuristics that describe the quality of clustering output in different adverse conditions.
by Lyudmila Borodavkina.
M.Eng.
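The Borodavkina thesis evaluates clustering algorithms on synthetically generated data whose true structure is known. A toy, stdlib-only version of that idea, generate labeled Gaussian clusters, assign points to clusters, and measure agreement with the ground truth, might look like the following; all names and parameters are illustrative, not taken from the thesis's Synthetic Data Generation software.

```python
# Generate 1-D Gaussian "blobs" with known labels, then check how well a
# nearest-center assignment recovers them. Raising `spread` relative to the
# gap between centers simulates the "adverse conditions" the thesis studies.

import random

def make_blobs(centers, n_per_cluster, spread, seed=0):
    """Sample n_per_cluster points around each center; return (points, labels)."""
    rng = random.Random(seed)
    points, labels = [], []
    for label, c in enumerate(centers):
        for _ in range(n_per_cluster):
            points.append(rng.gauss(c, spread))
            labels.append(label)
    return points, labels

def nearest_center(points, centers):
    """Assign each point to its nearest center (one k-means-style step)."""
    return [min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            for p in points]

centers = [0.0, 10.0]
pts, truth = make_blobs(centers, 50, spread=1.0)
assigned = nearest_center(pts, centers)
agreement = sum(a == t for a, t in zip(assigned, truth)) / len(truth)
print(agreement)  # well-separated clusters -> near-perfect recovery
```

Because the generator records the true labels, clustering quality can be scored exactly, which is what makes synthetic data useful as a test harness.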
Song, Qi. "Developing machine learning tools to understand transcriptional regulation in plants". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93512.
Doctor of Philosophy
Abiotic stresses constitute a major category of stresses that negatively impact plant growth and development. It is important to understand how plants cope with environmental stresses and reprogram gene responses, which in turn confer stress tolerance to plants. Genomics technology has been used over the past decade to generate gene expression data under different abiotic stresses for the model plant Arabidopsis. Recent genomic technologies, such as DAP-seq, have generated large-scale regulatory maps that provide information about which genes have the potential to regulate other genes in the genome. However, this technology does not provide context-specific interactions: it is unknown which transcription factor can regulate which gene under a specific abiotic stress condition. To address this challenge, several computational tools were developed to identify regulatory interactions and co-regulating genes in the stress response. In addition, using single-cell RNA-seq data generated from the model plant Arabidopsis, a preliminary analysis was performed to build a model that classifies Arabidopsis root cell types. This analysis is the first step towards the ultimate goal of constructing a cell-type-specific regulatory network for Arabidopsis, which is important for improving the current understanding of the stress response in plants.
Nagler, Dylan Jeremy. "SCHUBOT: Machine Learning Tools for the Automated Analysis of Schubert’s Lieder". Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:12705172.
Parikh, Neena (Neena S.). "Interactive tools for fantasy football analytics and predictions using machine learning". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100687.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-84).
The focus of this project is multifaceted: we aim to construct robust predictive models to project the performance of individual football players, and we plan to integrate these projections into a web-based application for in-depth fantasy football analytics. Most existing statistical tools for the NFL are limited to the use of macro-level data; this research looks to explore statistics at a finer granularity. We explore various machine learning techniques to develop predictive models for different player positions including quarterbacks, running backs, wide receivers, tight ends, and kickers. We also develop an interactive interface that will assist fantasy football participants in making informed decisions when managing their fantasy teams. We hope that this research will not only result in a well-received and widely used application, but also help pave the way for a transformation in the field of football analytics.
by Neena Parikh.
M. Eng.
Arango, Argoty Gustavo Alonso. "Computational Tools for Annotating Antibiotic Resistance in Metagenomic Data". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/88987.
Doctor of Philosophy
Antimicrobial resistance (AMR) is one of the biggest threats to public health. It has been estimated that by 2050 the number of deaths caused by AMR will surpass those caused by cancer. The seriousness of these projections requires urgent action to understand and control the spread of AMR. In the last few years, metagenomics has stood out as a reliable tool for the analysis of microbial diversity and of AMR. Through the use of next-generation sequencing, metagenomic studies can generate millions of short sequencing reads that are processed by computational tools. However, with the rapid adoption of metagenomics, a large amount of data has been generated, a situation that requires the development of computational tools and pipelines to manage data scalability, accessibility, and performance. In this thesis, several strategies, ranging from command-line and web-based platforms to machine learning, have been developed to address these computational challenges: computational pipelines to process metagenomic data in the cloud and on distributed systems, machine learning and deep learning tools that ease the computational cost of detecting antibiotic resistance genes in metagenomic data, and the integration of crowdsourcing as a way to curate and validate antibiotic resistance genes.
Schildt, Alexandra, and Jenny Luo. "Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279659.
AI has quickly grown from a vague concept into a new technology that many companies want to implement or are in the process of implementing. Researchers and organizations agree that AI and the advances in machine learning have enormous potential benefits. At the same time, there is growing concern that the design and implementation of AI systems do not take ethical risks into account. This has triggered a debate about which principles and values should guide AI in its development and use. There is no consensus on which values and principles should guide AI development, nor on which practical tools should be used to implement those principles in practice. Although researchers, organizations, and public authorities have proposed tools and strategies for working with ethical AI inside organizations, a holistic perspective that ties together the tools and strategies proposed in the ethical, technical, and organizational discourses is missing. This report aims to bridge that gap, with the following purpose: to explore and present the tools and methods that companies and organizations should have in order to build machine learning applications in a fair and transparent way. The study is qualitative in nature, and data was collected through a literature review and interviews with subject-matter experts from research and industry. Our results present a number of tools and methods for increasing fairness and transparency in machine learning systems. They also show that companies should work with a combination of tools and methods, both outside and inside the development process, as well as at different stages of the development process. Tools outside the development process, such as ethical guidelines, designated roles, workshops, and training, have positive effects on engagement and knowledge while offering valuable opportunities for improvement.
Furthermore, the results indicate that it is critical for high-level principles to be translated into measurable requirement specifications. We propose a number of pre-model, in-model, and post-model tools that companies and organizations can implement to increase fairness and transparency in their machine learning systems.
Madeo, Giovanni <1994>. "Machine learning tools for protein annotation: the cases of transmembrane β-barrel and myristoylated proteins". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10169/1/TesiMadeo.pdf.
BALDAZZI, GIULIA. "Advanced signal processing and machine learning tools for non-invasive foetal electrocardiography and intracardiac electrophysiology". Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1082764.
Paiano, Michele. "Sperimentazione di tools per la creazione e l'addestramento di Generative Adversarial Networks". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Jarvis, Matthew P. "Applying machine learning techniques to rule generation in intelligent tutoring systems". Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-112724.
Keywords: Intelligent Tutoring Systems; Model Tracing; Machine Learning; Artificial Intelligence; Programming by Demonstration. Includes bibliographical references.
Dubey, Anshul. "Search and Analysis of the Sequence Space of a Protein Using Computational Tools". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14115.
Krishnan, Vidhya Gomathi. "Novel approaches to predict the effect of single nucleotide polymorphisms on protein function using machine learning tools". Thesis, University of Leeds, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400943.
Wang, Kai. "DEVELOPMENT OF MACHINE LEARNING BASED BIOINFORMATICS TOOLS FOR CRISPR DETECTION, PIRNA IDENTIFICATION, AND WHOLE-GENOME BISULFITE SEQUENCING DATA ANALYSIS". Miami University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=miami1546437447863901.
Rampton, Travis Michael. "Deformation Twin Nucleation and Growth Characterization in Magnesium Alloys Using Novel EBSD Pattern Analysis and Machine Learning Tools". BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/4451.
Fredriksson Franzén, Måns, and Nils Tyrén. "Anomaly detection for automated security log analysis : Comparison of existing techniques and tools". Thesis, Linköpings universitet, Databas och informationsteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177728.
DESSI', DANILO. "Knowledge Extraction from Textual Resources through Semantic Web Tools and Advanced Machine Learning Algorithms for Applications in Various Domains". Doctoral thesis, Università degli Studi di Cagliari, 2020. http://hdl.handle.net/11584/285376.
Brown, Ryan Charles. "Development of Ground-Level Hyperspectral Image Datasets and Analysis Tools, and their use towards a Feature Selection based Sensor Design Method for Material Classification". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84944.
Ph. D.
Lee, Ji Hyun. "Development of a Tool to Assist the Nuclear Power Plant Operator in Declaring a State of Emergency Based on the Use of Dynamic Event Trees and Deep Learning Tools". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1543069550674204.
Fuentes, Antonio. "Proactive Decision Support Tools for National Park and Non-Traditional Agencies in Solving Traffic-Related Problems". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/88727.
Doctor of Philosophy
In this dissertation, a transportation system located in Jackson, Wyoming, under the jurisdiction of Grand Teton National Park and known as the Moose-Wilson Corridor is evaluated to identify transportation-related factors that influence its operational performance. The evaluation considers the corridor's unique prevailing conditions and takes future management strategies into account. Furthermore, emerging analytical strategies are implemented to identify and address operational concerns in the transportation system. Thus, this dissertation presents decision support tools for the evaluation of a unique National Park system in four distinct manuscripts. The manuscripts cover traditional approaches that break down and evaluate traffic operations and identify mitigation strategies. Additionally, emerging strategies for data evaluation with machine learning approaches are applied to GPS tracks to determine which vehicles stop at park attractions. Lastly, an agent-based model is developed in a flexible platform to build on the previous findings and evaluate the Moose-Wilson Corridor while considering future policy constraints and the unique interactions between visitors and the corridor's ecology and wildlife.
Tang, Danny M. Eng Massachusetts Institute of Technology. "Empowering novices to understand and use machine learning with personalized image classification models, intuitive analysis tools, and MIT App Inventor". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123130.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 129-131).
As machine learning permeates our society and manifests itself through commonplace technologies such as autonomous vehicles, facial recognition, and online store recommendations, it is necessary that the increasing number of people who rely on these tools understand how they work. As such, we need to develop effective tools and curricula for introducing machine learning to novices. My work focuses on teaching core machine learning concepts with image classification, one of the most basic and widespread examples of machine learning. I built a web interface that allows users to train and test personalized image classification models on pictures taken with their computers' webcams. Furthermore, I built an extension for MIT App Inventor, a platform for building mobile applications using a blocks-based programming language, that allows users to use the models they built in the web interface to classify objects in their mobile applications. Finally, I created high-school-level curricula for workshops based on the aforementioned interface and App Inventor extension, and ran the workshops with two classes of high school students from Boston Latin Academy. My findings indicate that high school students with no machine learning background are able to learn and understand general concepts and applications of machine learning through hands-on, non-technical activities, as well as successfully utilize models they built for personal use.
by Danny Tang.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
May, Madeth. "Using tracking data as reflexive tools to support tutors and learners in distance learning situations : an application to computer-mediated communications". Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0106/these.pdf.
This research effort focuses on the traces of synchronous and asynchronous interactions on Computer-Mediated Communication (CMC) tools, in situations of discussion, negotiation, and argument among learners. The main objective is to study how the collected traces can be used to design "reflexive tools" for the participants in the learning process. Reflexive tools refer to the useful data indicators computed from the collected traces that support the participants in terms of awareness, assessment, and evaluation of their CMC activities. We explored different tracking approaches and their limitations regarding trace collection, trace structuring, and trace visualization. To overcome these limitations, we have proposed (i) an explicit tracking approach to efficiently track CMC activities, (ii) a generic model of CMC traces to address the problems of CMC trace structuring, interoperability, and reusability, and (iii) a platform, TrAVis (Tracking Data Analysis and Visualization tools), specifically designed and developed to assist the participants, both tutors and learners, in the task of exploiting CMC traces. Another crucial part of this research is the design of data indicators. The main objective is to propose different sets of data indicators in graphical representations in order to enhance the visualization and analysis of information about CMC activities. Three case studies and an experiment in an authentic learning situation were conducted during this research to evaluate the technical aspects of the tracking approach and the utility of TrAVis with respect to the pedagogical and learning objectives of the participants.
Kruczyk, Marcin. "Rule-Based Approaches for Large Biological Datasets Analysis : A Suite of Tools and Methods". Doctoral thesis, Uppsala, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-206137.
Torabi Moghadam, Behrooz. "Computational discovery of DNA methylation patterns as biomarkers of ageing, cancer, and mental disorders : Algorithms and Tools". Doctoral thesis, Uppsala universitet, Institutionen för cell- och molekylärbiologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-320720.
Pełny tekst źródłaQuaranta, Giacomo. "Efficient simulation tools for real-time monitoring and control using model order reduction and data-driven techniques". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/667474.
Numerical simulation, the use of computers to run a program that implements a mathematical model of a physical system, is an important part of today's technological world. In many fields of science and engineering it is necessary to study the behavior of systems whose mathematical models are too complex to provide analytical solutions, and simulation makes the virtual evaluation of system responses possible (virtual twins). This drastically reduces the number of experimental tests required for accurate designs of the real system that the numerical model represents. However, these virtual twins, based on classical methods that make use of a rich representation of the system (e.g., the finite element method), rarely allow real-time feedback, even when computing on high-performance platforms. Under these circumstances, the real-time performance required in some applications is compromised. Indeed, virtual twins are static: they are used in the design of complex systems and their components, but they are not expected to accommodate or assimilate data in order to define dynamic data-driven application systems. Moreover, significant deviations between the observed response and the one predicted by the model are usually noticed, due to inaccuracies in the employed models, in the determination of the model parameters, or in their evolution in time. In this thesis, different methods are proposed to overcome these limitations in order to perform real-time monitoring and control. In the first part, Model Order Reduction techniques are used to satisfy the real-time constraints; these techniques compute a good approximation of the solution by simplifying the solution procedure rather than the model. The accuracy of the solution is not compromised, and efficient simulations can be performed (digital twins). In the second part, data-driven modeling is employed to fill the gap between the parametric solution, computed using non-intrusive model order reduction techniques, and the measured fields, in order to make dynamic data-driven application systems possible (hybrid twins).
Rauschenberger, Maria. "Early screening of dyslexia using a language-independent content game and machine learning". Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/667692.
Children with dyslexia have difficulties in learning how to read and write. They are often diagnosed after they fail in school, even though dyslexia is not related to general intelligence. In this thesis, we present an approach for earlier screening of dyslexia using a language-independent game in combination with machine learning models trained on the interaction data. By earlier we mean before children learn how to read and write. To reach this goal, we designed the game content using knowledge from the analysis of word errors made by people with dyslexia in different languages and from the parameters reported to be related to dyslexia, such as auditory and visual perception. With our two games (MusVis and DGames) we collected data sets (313 and 137 participants) in different languages (mainly Spanish and German) and evaluated them with machine learning classifiers. For MusVis we mainly use content that refers to a single acoustic or visual indicator, while DGames content refers to generic content related to various indicators. Our method provides an accuracy of 0.74 for German and 0.69 for Spanish, and F1-scores of 0.75 for both German and Spanish, in MusVis when Random Forest and Extra Trees are used. DGames was mainly evaluated with German and reached a peak accuracy of 0.67 and a peak F1-score of 0.74. Our results open the possibility of low-cost and early screening of dyslexia through the Web.
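The Rauschenberger abstract reports screening quality as accuracy and F1-score. For reference, here is a minimal sketch of how those two metrics are computed from binary predictions (1 = flagged as at risk); the example labels below are invented and unrelated to the thesis data.

```python
# Accuracy = fraction of correct predictions.
# F1 = harmonic mean of precision (flagged children who are truly at risk)
# and recall (at-risk children who were flagged).

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # invented ground-truth labels
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]  # invented classifier output
print(accuracy(y_true, y_pred))    # 0.75
print(f1_score(y_true, y_pred))    # 0.75
```

F1 is the more informative number for screening tasks like this one, where the at-risk class is the minority and plain accuracy can look good even for a classifier that flags no one.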
Cambe, Jordan. "Understanding the complex dynamics of social systems with diverse formal tools". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN043/document.
Pełny tekst źródła
For the past two decades, electronic devices have revolutionized the traceability of social phenomena. Social dynamics now leave numerical footprints, which can be analyzed to better understand collective behaviors. The development of large online social networks (like Facebook, Twitter and, more generally, mobile communications) and connected physical structures (like transportation networks and geolocalised social platforms) resulted in the emergence of large longitudinal datasets. These new datasets bring the opportunity to develop new methods to analyze temporal dynamics in and of these systems. Nowadays, the plurality of available data requires adapting and combining a plurality of existing methods in order to enlarge the global vision one has of such complex systems. The purpose of this thesis is to explore the dynamics of social systems using three sets of tools: network science, statistical physics modeling and machine learning. This thesis starts by giving general definitions and some historical context for the methods mentioned above. After that, we show the complex dynamics induced by introducing an infinitesimal quantity of new agents into a Schelling-like model and discuss the limitations of statistical model simulation. The third chapter shows the added value of using longitudinal data. We study the evolution of bike sharing system users' behavior and analyze the results of an unsupervised machine learning model aiming to classify users based on their profiles. The fourth chapter explores the differences between global and local methods for temporal community detection using scientometric networks. The last chapter merges complex network analysis and supervised machine learning in order to describe and predict the impact of new businesses on already established ones. We explore the temporal evolution of this impact and show the benefit of combining network topology measures with machine learning algorithms.
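The unsupervised step described in this abstract (grouping bike-sharing users by usage profile) can be sketched with k-means clustering. The feature choice and number of clusters below are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# rows = users; columns = e.g. weekday trips, weekend trips, mean trip length
# (hypothetical profile features); two synthetic behavioral groups
profiles = np.vstack([rng.normal(0.0, 1.0, (50, 3)),
                      rng.normal(4.0, 1.0, (50, 3))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
labels = km.labels_  # cluster assignment per user profile
```

Each cluster can then be inspected (e.g. its centroid) to interpret it as a behavioral user type, as the thesis does with its classification of user profiles.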
Costa, Fausto Guzzo da. "Employing nonlinear time series analysis tools with stable clustering algorithms for detecting concept drift on data streams". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13112017-105506/.
Pełny tekst źródła
Several industrial, scientific and commercial processes continuously produce theoretically infinite sequences of observations, called data streams. By analyzing the recurrences and the behavior changes of these streams, it is possible to obtain information about the phenomenon that produced them. The inference of stable models for such streams is supported by the study of data recurrences, while it is hampered by behavior changes. These changes are produced mainly by external influences still unknown to the current models, as happens when new investment strategies emerge in the stock market, or when there are human interventions in the climate, etc. In the context of Machine Learning (ML), much research has investigated these variations in data streams, referred to as concept drift. Detecting drift allows models to be updated in order to improve the prediction, the understanding and, eventually, the control of the influences governing the data stream under study. In this scenario, supervised algorithms suffer from the limitation of labeling data generated at high frequency and in large volumes, while unsupervised algorithms lack the theoretical foundation to provide guarantees for drift detection. Moreover, algorithms of both paradigms do not adequately represent the temporal dependencies among stream observations. In this context, this doctoral thesis introduces a new methodology for detecting concept drift, which confronts two shortcomings of both ML paradigms: i) the instability involved in modeling the data, and ii) the representation of temporal dependencies. This methodology is motivated by the theoretical framework of Carlsson and Memoli, which provides a stability property for hierarchical clustering algorithms with respect to data permutation.
To take advantage of this framework, the observations are embedded using Takens' embedding theorem, transforming them into independent data. These data are then clustered by the Permutation-Invariant Single-Linkage (PISL) algorithm, which respects the stability property of Carlsson and Memoli. From the input data, this algorithm generates dendrograms (or models), which are equivalent to ultrametric spaces. Successive models are compared using the Gromov-Hausdorff distance in order to detect concept drift in the stream. As a result, divergences between models are indeed associated with changes in the data. Two sets of experiments were performed, one considering abrupt drifts and the other gradual drifts. The results confirm that the proposed methodology is capable of detecting both abrupt and gradual concept drift, although it is better suited to more complicated scenarios. The main contributions of this thesis are: i) the use of Takens' embedding theorem to transform the input data into independent observations; ii) the implementation of the PISL algorithm in combination with the Gromov-Hausdorff distance (called PISLGH); iii) the comparison of the proposed methodology with others from the literature in different scenarios; and, finally, iv) the release of an R package (called streamChaos) that provides both tools for processing nonlinear data streams and several algorithms for detecting concept drift.
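The first two steps of the pipeline this abstract describes (delay embedding of the stream, then single-linkage clustering of the embedded window) can be sketched as follows. SciPy's generic single-linkage stands in for the thesis's PISL algorithm, and the Gromov-Hausdorff comparison between successive dendrograms is omitted; the series itself is synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def takens_embedding(series, dim=3, delay=2):
    """Delay-embed a 1-D series into dim-dimensional state vectors."""
    n = len(series) - (dim - 1) * delay
    return np.stack([series[i:i + n] for i in range(0, dim * delay, delay)],
                    axis=1)

rng = np.random.default_rng(0)
stream = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)

window = takens_embedding(stream, dim=3, delay=2)          # (196, 3) state vectors
dendrogram_model = linkage(window, method="single")        # hierarchical model of the window
```

In the thesis's methodology, models like `dendrogram_model` built from successive windows would be compared (via the Gromov-Hausdorff distance on the induced ultrametric spaces) to flag concept drift.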
Hefke, Lena [Verfasser], Ewgenij [Gutachter] Proschak i Stefan [Gutachter] Knapp. "Using fingerprints and machine learning tools for the prediction of novel dual active compounds for leukotriene A4 hydrolase and soluble epoxide hydrolase / Lena Hefke ; Gutachter: Ewgenij Proschak, Stefan Knapp". Frankfurt am Main : Universitätsbibliothek Johann Christian Senckenberg, 2020. http://d-nb.info/122685320X/34.
Pełny tekst źródła
Abokersh, Mohamed. "Decision Making Tools for Sustainable Transition Toward Low Carbon Energy Technologies in the Residential Sector". Doctoral thesis, Universitat Rovira i Virgili, 2021. http://hdl.handle.net/10803/671958.
Pełny tekst źródła
Aligning with the ambitious EU 2030 climate and energy package for cutting greenhouse emissions and replacing conventional heat sources with a share of renewable energy to achieve a net-zero-energy community, stakeholders in the residential sector face several technical, economic and environmental issues in meeting the EU targets in the near future. This thesis focuses on two key structural transformations needed for a sustainable transition towards clean energy production: the low-carbon energy technologies problem, represented by solar district heating systems coupled with seasonal energy storage, and its application to achieve Nearly Zero Energy Buildings. These challenges are tackled through the design and optimization of clean energy systems, combined with machine learning and data analysis, to develop Computer-Aided Process Engineering tools. These tools would help address the stakeholders' challenges, thus contributing to the transition towards a more sustainable future.
ILARDI, DAVIDE. "Data-driven solutions to enhance planning, operation and design tools in Industry 4.0 context". Doctoral thesis, Università degli studi di Genova, 2023. https://hdl.handle.net/11567/1104513.
Pełny tekst źródła
BELCORE, ELENA. "Generation of a Land Cover Atlas of environmental critic zones using unconventional tools". Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2907028.
Pełny tekst źródła
Thun, Julia, i Rebin Kadouri. "Automating debugging through data mining". Thesis, KTH, Data- och elektroteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-203244.
Pełny tekst źródła
Today's systems generate large amounts of log messages. These messages can be efficiently stored, searched and visualized using log management tools. Analysis of log messages provides insight into system behavior such as performance, server status and execution errors that can occur in web applications. iStone AB wants to investigate the possibility of automating debugging. Since iStone mostly performs its debugging manually, finding errors within the system takes time. The purpose was therefore to find solutions that reduce the time it takes to debug. An analysis of log messages in access and console logs was performed to select the most suitable data mining techniques for iStone's system. Data mining algorithms and log management tools were compared. The results of the comparisons showed that the ELK Stack together with a combination of Eclat and a hybrid algorithm (Eclat and Apriori) were the most suitable choices. To demonstrate that this is the case, the ELK Stack and Eclat were implemented. The results obtained show that data mining and the use of a log analysis platform can facilitate debugging and reduce the time it takes.
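The frequent-pattern step can be illustrated with a small, self-contained version of the Eclat idea: represent the data vertically as item-to-transaction-id lists and intersect those lists to grow frequent itemsets level by level. The log "transactions" below are invented placeholders, not iStone's data.

```python
from itertools import combinations

def eclat(transactions, min_support=2):
    """Frequent itemset mining on vertical tid-lists (the Eclat idea)."""
    # Build item -> set of transaction ids (the vertical representation)
    tidlists = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tidlists.setdefault(item, set()).add(tid)
    frequent = {frozenset([i]): tids for i, tids in tidlists.items()
                if len(tids) >= min_support}
    result = dict(frequent)
    current = frequent
    while current:
        nxt = {}
        for (a, ta), (b, tb) in combinations(current.items(), 2):
            union = a | b
            if len(union) == len(a) + 1:          # grow by exactly one item
                tids = ta & tb                    # tid-list intersection = support
                if len(tids) >= min_support and union not in nxt:
                    nxt[union] = tids
        result.update(nxt)
        current = nxt
    return {tuple(sorted(k)): len(v) for k, v in result.items()}

# Each "transaction" = the set of tokens seen in one log event (placeholders)
logs = [{"ERROR", "timeout", "api"}, {"ERROR", "timeout"},
        {"WARN", "api"}, {"ERROR", "timeout", "api"}]
patterns = eclat(logs, min_support=2)
```

Frequent co-occurring tokens such as `("ERROR", "timeout")` are exactly the kind of pattern that can point a debugger at recurring failure modes.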
GALLO, Giuseppe. "Architettura e second digital turn, l’evoluzione degli strumenti informatici e il progetto". Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/514731.
Pełny tekst źródła
The digital condition that has gradually hybridized our lives, transforming atoms into bits, has now cemented itself in our society, enriching post-modernity and determining a new form of liquidity that has sharpened with the advent of the internet. It is a historical moment marked by a new digital maturity, evident in our diverse relationship to data and in the spread of advanced machine learning methods, which both promise a new understanding of contemporary complexity and contribute to the propagation of the technical apparatus throughout the world. These changes, so profound as to affect our culture, are changing our way of perceiving space, and therefore of inhabiting it: conditions that undoubtedly have repercussions on architectural design in its capacity as a human activity geared towards human beings. The increased complexity that has touched our discipline with Postmodernism has meanwhile found new support in Derridian deconstruction, in a historical moment marked by great emphasis on the opportunities that digital tools offer. These are means we first welcomed into our discipline exclusively as tools for representation, and ones that then themselves determined the emergence of new approaches based on the inclusive potential of continuity and variation. None of the protagonists of the first digital turn could probably have imagined the effects that digital culture would now be having on architectural design. A digital culture that has grown ever stronger through almost thirty years of methodological and formal experimentation, as well as organizational and instrumental changes, from the rise of BIM to the new algorithmic possibilities represented by visual programming languages and numerical simulations.
These have been the primary tools of the push towards digital, a digital which today has reached a second turn in the field of architecture, identified by Carpo in new design approaches made possible by the greater availability of data. This condition inevitably affects both science and architectural design, but nevertheless fails to fully pervade a contemporaneity in which, as far as architecture is concerned, technology spreads its wings, thus affecting the meaning of our role within society. With these multifaceted considerations as a starting point, and fully aware of how complex a dialogue we must engage in to reconstruct as neutral, historical and organic a vision as possible of the phase architecture is experiencing, it is my opinion that we must establish a holistic approach: one that is both inclusive and capable of expanding to the point of acquiring a philosophical perspective, as well as able to attend to technical, operational, methodological, instrumental and relational details. This objective is one I have striven to keep alive throughout the three years of my doctoral research, which in its various phases looks at the mutations that digital technology is producing in society and therefore in architectural design. My research is enriched by ten interviews with prominent protagonists of contemporary architecture, for whose time and availability I am grateful. These testimonials allowed me to see the complexities of contemporary design up close, and they represent a central part of this thesis, which aims equally to provide a historical interpretation of the challenges posed by contemporaneity and to identify the responsibilities we must uphold for human beings to remain at the centre of our work.
Wusteman, Judith. "EBKAT : an explanation-based knowledge acquisition tool". Thesis, University of Exeter, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280682.
Pełny tekst źródła
Cooper, Clayton Alan. "Milling Tool Condition Monitoring Using Acoustic Signals and Machine Learning". Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1575539872711423.
Pełny tekst źródła
BUBACK, SILVANO NOGUEIRA. "USING MACHINE LEARNING TO BUILD A TOOL THAT HELPS COMMENTS MODERATION". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19232@1.
Pełny tekst źródła
One of the main changes brought by Web 2.0 is the increase of user participation in content generation, mainly in social networks and in comments on news and service sites. These comments are valuable to the sites because they bring feedback and motivate other people to participate and to spread the content. On the other hand, these comments also bring abuse such as profanity and spam. While for some sites their own community moderation is enough, for others this inappropriate content may compromise their quality. In order to help these sites, a tool that uses machine learning techniques was built to moderate comments. As a test to compare results, two datasets captured from Globo.com were used: the first with 657,405 comments posted through its site and the second with 451,209 messages captured from Twitter. Our experiments show that the best result is achieved when comment learning is done according to the subject being commented on.
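A minimal sketch of a machine-learning comment moderator of the kind this abstract describes: a bag-of-words text classifier that labels comments approve/reject. The pipeline, training comments and labels below are invented illustrations, not the thesis's actual model or Globo.com data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: moderated comments with their decisions
comments = ["great article, thanks", "totally agree with this",
            "buy cheap watches at my site", "you are an idiot",
            "interesting point about the election", "spam spam click here"]
labels = ["approve", "approve", "reject", "reject", "approve", "reject"]

moderator = make_pipeline(TfidfVectorizer(), LogisticRegression())
moderator.fit(comments, labels)

decision = moderator.predict(["click here for cheap watches"])[0]
```

Training one such model per subject area (as the thesis's best result suggests) would mean fitting a separate pipeline on each subject's comment history.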
Hegemann, Lena. "Reciprocal Explanations : An Explanation Technique for Human-AI Partnership in Design Ideation". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281339.
Pełny tekst źródła
Advances in creative artificial intelligence (AI) have led to systems that can actively collaborate with designers during the ideation process, i.e., in the creation, development and communication of ideas. In group work, it is important to be able to make suggestions and explain the reasoning behind them, as well as to understand the other group members' reasoning. This increases reflection and trust among members, facilitates the alignment of goals, and provides inspiration through hearing different perspectives. Although creative AI systems have the ability to inspire through their independent suggestions, state-of-the-art creative AI systems do not exploit these benefits to facilitate group work, owing to their limited ability to reason about their suggestions: the reasoning is often one-sided or missing entirely. AI systems that can explain their reasoning are already a major research interest in many application areas. However, there is a lack of knowledge about their influence on the creative process. Moreover, it is unknown whether a user can actually benefit from the opportunity to explain their design decisions to an AI system. This thesis investigates whether reciprocal explanations, a new technique combining explanations from and to an AI system, can improve the designer-AI collaboration during idea exploration. I integrated reciprocal explanations into an AI aid that supports the creation of mood boards, a common method for concept development. In our implementation, the AI system uses textual descriptions to explain which parts of its suggestions match or complement the current mood board. Occasionally, it asks the user for explanations so that it can adapt its suggestion strategy to the user's wishes. We conducted a study with 16 professional designers who used the tool to create mood boards.
Feedback was collected through presentations and semi-structured interviews. The study highlighted the need for explanations and reasoning that make the principles behind the AI system transparent to the user. Increased alignment between the user's and the system's goals motivated participants to provide explanations to the system. Enabling users to explain their design decisions to the AI system also improved their reflection on their own choices.
Binsaeid, Sultan Hassan. "Multisensor Fusion for Intelligent Tool Condition Monitoring (TCM) in End Milling Through Pattern Classification and Multiclass Machine Learning". Scholarly Repository, 2007. http://scholarlyrepository.miami.edu/oa_dissertations/7.
Pełny tekst źródła
Gert, Oskar. "Using Machine Learning as a Tool to Improve Train Wheel Overhaul Efficiency". Thesis, Linköpings universitet, Medie- och Informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171121.
Pełny tekst źródłaEDIN, ANTON, i MARIAM QORBANZADA. "E-Learning as a tool to support the integration of machine learning in product development processes". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279757.
Pełny tekst źródła
This research focuses on the application of e-learning methods as an alternative to on-site lessons when integrating machine learning into the product development process. Above all, the aim is to investigate whether e-learning methods can be used to make machine learning more accessible in the product development process. This topic is of interest because a deeper understanding of it paves the way for more effective remote learning and for the scalability of knowledge sharing. To achieve this, two groups of employees from the same corporate group, but based in different geographic regions, were asked to take part in a series of lessons developed by the authors. One group received the material through on-site seminars, while the other was invited to participate in a series of tele-lessons. Once both groups had completed the lessons, some participants were asked to be interviewed. Some of the participants' direct managers and project leaders were also interviewed, in order to compare the participants' opinions with those of non-participating stakeholders. A combination of a qualitative theoretical analysis and the interview responses formed the basis for the presented results. Respondents indicated that they preferred the training held on site, but further coding of the interview responses showed that the teaching method did not have a major impact on the participants' ability to absorb the material. Although the results indicate that e-learning is a technique with many advantages, shortcomings in the technique's ability to integrate human interaction seem to prevent it from reaching its full potential, and thereby also hinder its integration into the product development process.
Bheemireddy, Shruthi. "MACHINE LEARNING-BASED ONTOLOGY MAPPING TOOL TO ENABLE INTEROPERABILITY IN COASTAL SENSOR NETWORKS". MSSTATE, 2009. http://sun.library.msstate.edu/ETD-db/theses/available/etd-09222009-200303/.
Pełny tekst źródła
Hashmi, Muhammad Ali S. M. Massachusetts Institute of Technology. "Said-Huntington Discourse Analyzer : a machine-learning tool for classifying and analyzing discourse". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98543.
Pełny tekst źródła
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 71-74).
Critical discourse analysis (CDA) aims to understand the link "between language and the social" (Mautner and Baker, 2009), and attempts to demystify social construction and power relations (Gramsci, 1999). On the other hand, corpus linguistics deals with principles and practice of understanding the language produced within large amounts of textual data (Oostdijk, 1991). In my thesis, I have aimed to combine, using machine learning, the CDA approach with corpus linguistics with the intention of deconstructing dominant discourses that create, maintain and deepen fault lines between social groups and classes. As an instance of this technological framework, I have developed a tool for understanding and defining the discourse on Islam in the global mainstream media sources. My hypothesis is that the media coverage in several mainstream news sources tends to contextualize Muslims largely as a group embroiled in conflict at a disproportionately large level. My hypothesis is based on the assumption that discourse on Islam in mainstream global media tends to lean toward the dangerous "clash of civilizations" frame. To test this hypothesis, I have developed a prototype tool "Said-Huntington Discourse Analyzer" that machine classifies news articles on a normative scale -- a scale that measures "clash of civilization" polarization in an article on the basis of conflict. The tool also extracts semantically meaningful conversations for a media source using Latent Dirichlet Allocation (LDA) topic modeling, allowing the users to discover frames of conversations on the basis of Said-Huntington index classification. I evaluated the classifier on human-classified articles and found that the accuracy of the classifier was very high (99.03%). Generally, text analysis tools uncover patterns and trends in the data without delineating the 'ideology' that permeates the text. 
The machine learning tool presented here classifies media discourse on Islam in terms of conflict and non-conflict, and attempts to shed light on the 'ideology' that permeates the text. In addition, the tool provides textual analysis of news articles based on CDA methodologies.
by Muhammad Ali Hashmi.
S.M.
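The LDA topic-modeling step this abstract describes can be sketched as follows; scikit-learn's `LatentDirichletAllocation` stands in for the tool's implementation, and the headlines and topic count below are invented placeholders, not the thesis's news corpus.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder headlines standing in for a news-article corpus
docs = ["peace talks resume in the region",
        "conflict escalates after border clash",
        "cultural festival celebrates local art",
        "ceasefire agreement signed by both sides",
        "museum opens new exhibition of art"]

counts = CountVectorizer().fit_transform(docs)        # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)                # each row is a topic distribution
```

Inspecting `lda.components_` gives the top words per topic, i.e. the "semantically meaningful conversations" that can then be cross-tabulated with the conflict/non-conflict classification.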
McCoy, Mason Eugene. "A Twitter-Based Prediction Tool for Digital Currency". OpenSIUC, 2018. https://opensiuc.lib.siu.edu/theses/2302.
Pełny tekst źródła
Jakob, Persson. "How to annotate in video for training machine learning with a good workflow". Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-187078.
Pełny tekst źródła
Artificial intelligence and machine learning are used in many different areas; one of these areas is image recognition. In the production of a TV program or a film, image recognition can be used to help editors find specific objects, scenes or people in the video content, which speeds up production. However, image recognition software does not always work perfectly and cannot be used as-is in the production of a TV program or film. To improve image recognition software, its algorithm needs to be trained on large datasets of images and labels. But creating these datasets takes time, and tools are needed that can create datasets and retrain image recognition algorithms so that they perform better. The purpose of this thesis was to investigate whether it was possible to create a tool that can annotate objects and people in video and use the data as training data for algorithms, and also to create a tool that can retrain image recognition algorithms, based on the data obtained from an image recognition program, so that they improve. It was also important that these tools offered a good workflow for the users. The study consisted of a theoretical part, to gain more knowledge about annotation in video and about creating good UX design with a good workflow. Interviews were also held to learn more about the requirements of the product and who would use it. This resulted in a user scenario and a workflow that, together with the knowledge from the theoretical study, were used to create a hi-fi prototype through an iterative process with usability testing. The outcome was a final hi-fi prototype with good design and a good workflow for the users, in which it is possible to annotate objects and people with a bounding box and to retrain image recognition algorithms that have been run on video.
Massaccesi, Luciano. "Machine Learning Software for Automated Satellite Telemetry Monitoring". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20502/.
Pełny tekst źródła
Spies, Lucas Daniel. "Machine-Learning based tool to predict Tire Noise using both Tire and Pavement Parameters". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91407.
Pełny tekst źródłaMaster of Science
Tire-Pavement Interaction Noise (TPIN) becomes the main noise-source contributor for passenger vehicles traveling at speeds above 40 km/h. It therefore represents one of the main contributors to environmental noise pollution in residential areas near highways. TPIN has been the subject of exhaustive study since the 1970s. Still, almost 50 years later, there is no accurate way to model it. This is a consequence of the large number of noise-generation mechanisms involved in this phenomenon and their highly complex nature. It is acknowledged that the main noise mechanisms involve tire vibration and air pumping within the tire tread and pavement surface. Moreover, TPIN represents the only vehicle noise source strongly affected by an external factor such as pavement roughness. For the last decade, machine learning algorithms, based on the structure of the human brain, have been implemented to model TPIN. However, their development relies on experimental data and does not provide strong physical insight into the problem. This research focused on the study of the correct configuration of such machine learning algorithms applied to the very specific task of TPIN prediction. Moreover, a customized configuration showed improvements in the TPIN prediction capabilities of these algorithms. During the second stage of this thesis, tire noise tests were undertaken for different tires on different pavement surfaces on the Virginia Tech SMART Road. The experimental data were used to develop an approach to account for pavement roughness when predicting TPIN. Finally, the new machine learning algorithm configuration, along with the approach to account for pavement roughness, was combined with previous work to obtain what is the first reasonably accurate and complete computational tool to predict tire noise. This tool uses as inputs: 1) tire parameters, 2) pavement parameters, and 3) vehicle speed.
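The shape of such a tool (a neural network mapping tire parameters, pavement parameters and speed to a noise level) can be sketched as follows. The feature set, network architecture and synthetic data below are illustrative assumptions; the thesis's actual model, features and measurements differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Columns stand in for: tread depth, tire stiffness, pavement roughness,
# mean profile depth, vehicle speed (all normalized, all hypothetical)
X = rng.uniform(size=(400, 5))
# Synthetic target: noise level in dB, loosely dominated by speed and roughness
noise_db = 70 + 20 * X[:, 4] + 5 * X[:, 2] + rng.normal(0.0, 0.5, 400)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, noise_db)
pred = model.predict(X[:3])  # predicted TPIN levels for three input vectors
```

With measured data, the network would be trained on recorded noise spectra rather than a synthetic formula, and the pavement-roughness features would come from profile measurements like those taken on the SMART Road.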