Dissertations on the topic "Machine learning tools"

To see other types of publications on this topic, follow the link: Machine learning tools.

Format your source according to APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations for your research on the topic "Machine learning tools".

Next to every work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Kanwar, John. "Smart cropping tools with help of machine learning." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74827.

Full text of the source
Abstract:
Machine learning has been around for a long time, and its applications span a wide variety of subjects, from self-driving cars to data mining. When a person takes a picture with their mobile phone, it easily happens that the photo is a little bit crooked. People also take spontaneous photos with their phones, which can result in something irrelevant ending up in the corner of the image. This thesis combines machine learning with photo editing tools. It explores how machine learning can be used to automatically crop images in an aesthetically pleasing way and how machine learning can be used to create a portrait cropping tool. It also goes through how a straightening function can be implemented with the help of machine learning. Finally, it compares these tools with the automatic cropping tools of other software.
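As a toy illustration of scored automatic cropping (not the thesis's trained models; the image, window sizes and score function below are invented for the sketch), one can slide candidate crop windows over an image and keep the highest-scoring one; a learned aesthetic model would replace the hand-written score:

```python
# Toy sketch: exhaustive search over crop windows with a stand-in score.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((120, 160))
img[40:80, 60:120] += 2.0                      # a bright "subject"

def score(window):
    gy, gx = np.gradient(window)
    return np.hypot(gx, gy).mean()             # stand-in for a learned aesthetic score

best = max(((r, c) for r in range(0, 60, 10) for c in range(0, 80, 10)),
           key=lambda rc: score(img[rc[0]:rc[0]+60, rc[1]:rc[1]+80]))
print("best 60x80 crop at (row, col):", best)
```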
2

Nordin, Alexander Friedrich. "End to end machine learning workflow using automation tools." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119776.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 79-80).
We have developed an open-source library named Trane and integrated it with two other open-source libraries to build an end-to-end machine learning workflow that can facilitate rapid development of machine learning models. The three components of this workflow are Trane, Featuretools and ATM. Trane enumerates tens of prediction problems relevant to any dataset using meta-information about the data. Furthermore, Trane generates the training examples required for training machine learning models. Featuretools is open-source software for automatically generating features from a dataset. Auto Tune Models (ATM), an open-source library, performs a high-throughput search over modeling options to find the best modeling technique for a problem. We show the capability of these three tools and highlight the open-source development of Trane.
by Alexander Friedrich Nordin.
M. Eng.
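The division of labor described in the abstract can be pictured with a generic sketch (illustrative only; the toy data and the scikit-learn model search stand in for Trane, Featuretools and ATM, whose real APIs are not shown here):

```python
# Generic three-stage sketch: problem enumeration, feature generation, model search.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
events = rng.random((1000, 3))                      # toy per-event attributes
entity = rng.integers(0, 100, size=1000)            # which customer each event belongs to

# "Trane"-like step (toy): predict whether a customer's mean attribute-0 exceeds 0.5
X = np.array([events[entity == e].mean(axis=0) for e in range(100)])  # "Featuretools"-like step
y = (X[:, 0] > 0.5).astype(int)

# "ATM"-like step: search over modeling options
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {"n_estimators": [10, 50], "max_depth": [2, 5, None]}, cv=3)
print("best model:", search.fit(X, y).best_params_)
```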
3

Jalali, Mana. "Voltage Regulation of Smart Grids using Machine Learning Tools." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/95962.

Full text of the source
Abstract:
Smart inverters have been considered the primary fast solution for voltage regulation in power distribution systems. Optimizing the coordination between inverters can be computationally challenging, and reactive power control using fixed local rules has been shown to be subpar. Here, nonlinear inverter control rules are proposed by leveraging machine learning tools. The designed control rules can be expressed by a set of coefficients and can be nonlinear functions of both remote and local inputs. The proposed control rules are designed to jointly minimize the voltage deviation across buses. By using support vector machines, control rules with sparse representations are obtained, which decreases the communication between the operator and the inverters. The designed control rules are tested under different grid conditions and compared with other reactive power control schemes. The results show promising performance.
With the advent of renewable energies into power systems, innovative and automatic monitoring and control techniques are required. More specifically, voltage regulation for distribution grids with solar generation can be a challenging task. Moreover, due to the frequency and intensity of the voltage changes, traditional utility-owned voltage regulation equipment is not useful in the long term. On the other hand, smart inverters installed with solar panels can be used for regulating the voltage. Smart inverters can be programmed to inject or absorb reactive power, which directly influences the voltage. The utility can monitor, control and sync the inverters across the grid to maintain the voltage within the desired limits. Machine learning and optimization techniques can be applied to automate voltage regulation in smart grids using the smart inverters installed with solar panels. In this work, voltage regulation is addressed by reactive power control.
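A minimal sketch of the kind of learned control rule described above, using synthetic data and scikit-learn's support vector regression (the bus voltages, the toy "optimal" setpoints and all parameters are assumptions for illustration, not the thesis's formulation):

```python
# Learn a reactive-power control rule q = f(local + remote voltages) with SVR,
# whose solution depends only on a sparse set of support vectors.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
V = rng.normal(1.0, 0.02, size=(500, 4))        # p.u. voltages at 4 buses (local + remote)
q_opt = -5.0 * (V.mean(axis=1) - 1.0)           # toy "optimal" reactive-power setpoints

rule = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(V, q_opt)
print("support vectors used:", rule.support_.size, "of", len(V))
print("predicted q for a high-voltage snapshot:", rule.predict([[1.03, 1.02, 1.04, 1.01]]))
```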
4

Viswanathan, Srinidhi. "ModelDB : tools for machine learning model management and prediction storage." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113540.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-100).
Building a machine learning model is often an iterative process. Data scientists train hundreds of models before finding one that meets acceptable criteria, but tracking these models and remembering the insights obtained from them is an arduous task. In this thesis, we present two main systems for facilitating better tracking, analysis, and querying of scikit-learn machine learning models. First, we introduce our scikit-learn client for ModelDB, a novel end-to-end system for managing machine learning models. The client allows data scientists to easily track diverse scikit-learn workflows with minimal changes to their code. Then, we describe our extension to ModelDB, PredictionStore. While the ModelDB client enables users to track the different models they have run, PredictionStore creates a prediction matrix to tackle the remaining piece of the puzzle: facilitating better exploration and analysis of model performance. We implement a query API to assist in analyzing predictions and answering nuanced questions about models. We also implement a variety of algorithms that use the prediction matrix to recommend particular models to ensemble. We evaluate ModelDB and PredictionStore on different datasets and determine that ModelDB successfully tracks scikit-learn models and that most complex model queries can be executed in a matter of seconds using our query API. In addition, the workflows demonstrate significant improvement in accuracy using the ensemble algorithms. The overall goal of this research is to provide a flexible framework for training scikit-learn models, storing their predictions/models, and efficiently exploring and analyzing the results.
by Srinidhi Viswanathan.
M. Eng.
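The prediction-matrix idea can be pictured in miniature. The sketch below is illustrative only (toy data and invented model ids, not ModelDB's or PredictionStore's actual API): it stores per-model predictions on a fixed test set, answers a simple accuracy query, and greedily searches for a majority-vote ensemble:

```python
# Toy "prediction matrix" keyed by model id, with a query helper and ensemble picker.
import numpy as np

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
pred_matrix = {                                  # model_id -> predictions on a fixed test set
    "tree_v1": np.array([0, 1, 0, 0, 1, 0, 1, 0]),
    "svm_v2":  np.array([0, 1, 1, 0, 0, 0, 1, 1]),
    "knn_v3":  np.array([1, 1, 1, 0, 1, 0, 0, 1]),
}

def accuracy(model_id):
    return float((pred_matrix[model_id] == y_true).mean())

def best_majority_triple():
    """Query: which two models, majority-voted with the best single model,
    give the highest ensemble accuracy?"""
    base = max(pred_matrix, key=accuracy)
    best = None
    for a in pred_matrix:
        for b in pred_matrix:
            if len({a, b, base}) < 3:
                continue
            votes = pred_matrix[a] + pred_matrix[b] + pred_matrix[base]
            acc = float(((votes >= 2).astype(int) == y_true).mean())
            if best is None or acc > best[0]:
                best = (acc, base, a, b)
    return best

print({m: accuracy(m) for m in pred_matrix})
print(best_majority_triple())
```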
5

Borodavkina, Lyudmila 1977. "Investigation of machine learning tools for document clustering and classification." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/8932.

Full text of the source
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 57-59).
Data clustering is the problem of discovering the underlying data structure without any prior information about the data. The focus of this thesis is to evaluate a few of the modern clustering algorithms in order to determine their performance in adverse conditions. Synthetic data generation software is presented as a useful tool both for generating test data and for investigating the results of data clustering. Several theoretical models and their behavior are discussed, and, as the result of analyzing a large number of quantitative tests, we come up with a set of heuristics that describe the quality of clustering output under different adverse conditions.
by Lyudmila Borodavkina.
M.Eng.
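The evaluation loop the abstract describes, generating synthetic clusters, degrading them, and scoring the clustering output, can be sketched with standard scikit-learn utilities (a minimal stand-in for the thesis's synthetic data generation software):

```python
# Generate synthetic clusters, worsen the overlap, and watch quality fall off.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

for noise in (0.5, 2.0, 4.0):
    X, y = make_blobs(n_samples=600, centers=4, cluster_std=noise, random_state=0)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print(f"cluster_std={noise}: ARI={adjusted_rand_score(y, labels):.2f}")
```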
6

Song, Qi. "Developing machine learning tools to understand transcriptional regulation in plants." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93512.

Full text of the source
Abstract:
Abiotic stresses constitute a major category of stresses that negatively impact plant growth and development. It is important to understand how plants cope with environmental stresses and reprogram gene responses, which in turn confers stress tolerance. Recent advances in genomic technologies have led to the generation of much genomic data for the model plant Arabidopsis. To understand gene responses activated by specific external stress signals, these large-scale data sets need to be analyzed to generate new insights into gene functions in stress responses. This poses new computational challenges of mining gene associations and reconstructing regulatory interactions from large-scale data sets. In this dissertation, several computational tools were developed to address these challenges. In Chapter 2, ConSReg was developed to infer condition-specific regulatory interactions and prioritize transcription factors (TFs) that are likely to play condition-specific regulatory roles. A comprehensive investigation was performed to optimize the performance of ConSReg, and a systematic recovery of nitrogen-response TFs was performed to evaluate it. In Chapter 3, CoReg was developed to infer co-regulation between genes, using only regulatory networks as input. CoReg was compared to other computational methods, and the results showed that it outperformed them. CoReg was further applied to identify modules in a regulatory network generated from DAP-seq (DNA affinity purification sequencing). Using a large expression dataset generated under many abiotic stress treatments, many regulatory modules with common regulatory edges were found to be highly co-expressed, suggesting that target modules are structurally stable under abiotic stress conditions. In Chapter 4, an exploratory analysis was performed to classify cell types from Arabidopsis root single-cell RNA-seq data. This is a first step towards the construction of a cell-type-specific regulatory network for Arabidopsis root cells, which is important for improving the current understanding of stress response.
Doctor of Philosophy
Abiotic stresses constitute a major category of stresses that negatively impact plant growth and development. It is important to understand how plants cope with environmental stresses and reprogram gene responses, which in turn confers stress tolerance to plants. Genomics technology has been used in the past decade to generate gene expression data under different abiotic stresses for the model plant Arabidopsis. Recent genomic technologies, such as DAP-seq, have generated large-scale regulatory maps that provide information regarding which gene has the potential to regulate other genes in the genome. However, this technology does not provide context-specific interactions: it is unknown which transcription factor can regulate which gene under a specific abiotic stress condition. To address this challenge, several computational tools were developed to identify regulatory interactions and co-regulating genes for stress response. In addition, using single-cell RNA-seq data generated from the model plant organism Arabidopsis, a preliminary analysis was performed to build a model that classifies Arabidopsis root cell types. This analysis is the first step towards the ultimate goal of constructing a cell-type-specific regulatory network for Arabidopsis, which is important for improving the current understanding of stress response in plants.
7

Nagler, Dylan Jeremy. "SCHUBOT: Machine Learning Tools for the Automated Analysis of Schubert’s Lieder." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:12705172.

Full text of the source
Abstract:
This paper compares various methods for automated musical analysis, applying machine learning techniques to gain insight about the Lieder (art songs) of composer Franz Schubert (1797-1828). Known as a rule-breaking, individualistic, and adventurous composer, Schubert produced hundreds of emotionally-charged songs that have challenged music theorists to this day. The algorithms presented in this paper analyze the harmonies, melodies, and texts of these songs. This paper begins with an exploration of the relevant music theory and machine learning algorithms (Chapter 1), alongside a general discussion of the place Schubert holds within the world of music theory. The focus is then turned to automated harmonic analysis and hierarchical decomposition of MusicXML data, presenting new algorithms for phrase-based analysis in the context of past research (Chapter 2). Melodic analysis is then discussed (Chapter 3), using unsupervised clustering methods as a complement to harmonic analyses. This paper then seeks to analyze the texts Schubert chose for his songs in the context of the songs' relevant musical features (Chapter 4), combining natural language processing with feature extraction to pinpoint trends in Schubert's career.
8

Parikh, Neena (Neena S. ). "Interactive tools for fantasy football analytics and predictions using machine learning." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100687.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-84).
The focus of this project is multifaceted: we aim to construct robust predictive models to project the performance of individual football players, and we plan to integrate these projections into a web-based application for in-depth fantasy football analytics. Most existing statistical tools for the NFL are limited to the use of macro-level data; this research looks to explore statistics at a finer granularity. We explore various machine learning techniques to develop predictive models for different player positions including quarterbacks, running backs, wide receivers, tight ends, and kickers. We also develop an interactive interface that will assist fantasy football participants in making informed decisions when managing their fantasy teams. We hope that this research will not only result in a well-received and widely used application, but also help pave the way for a transformation in the field of football analytics.
by Neena Parikh.
M. Eng.
9

Arango, Argoty Gustavo Alonso. "Computational Tools for Annotating Antibiotic Resistance in Metagenomic Data." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/88987.

Full text of the source
Abstract:
Metagenomics has become a reliable tool for the analysis of the microbial diversity and the molecular mechanisms carried out by microbial communities. Through next-generation sequencing, metagenomic studies can generate millions of short sequencing reads that are processed by computational tools. However, with the rapid adoption of metagenomics, a large amount of data has been generated. This situation requires the development of computational tools and pipelines to manage data scalability, accessibility, and performance. In this thesis, several strategies, ranging from command-line and web-based platforms to machine learning, have been developed to address these computational challenges. Interpreting specific information from metagenomic data is especially challenging for environmental samples, as current annotation systems only offer a broad classification of microbial diversity and function. Therefore, I developed MetaStorm, a public web service that facilitates customization of computational analyses for metagenomic data. The identification of antibiotic resistance genes (ARGs) from metagenomic data is carried out by searches against curated databases, producing a high rate of false negatives. Thus, I developed DeepARG, a deep learning approach that uses the distribution of sequence alignments to predict over 30 antibiotic resistance categories with high accuracy. Curation of ARGs is a labor-intensive process where errors can easily propagate. Thus, I developed ARGminer, a web platform dedicated to the annotation and inspection of ARGs through crowdsourcing. Effective environmental monitoring tools should ideally capture not only ARGs but also mobile genetic elements and indicators of co-selective forces, such as metal resistance genes. Here, I introduce NanoARG, an online computational resource that takes advantage of the long reads produced by nanopore sequencing technology to provide insights into mobility, co-selection, and pathogenicity. Sequence alignment has been one of the preferred methods for analyzing metagenomic data; however, it is slow and requires substantial computing resources. Therefore, I developed MetaMLP, a machine learning approach that uses a novel representation of protein sequences to perform classifications of protein functions. The method is accurate, identifies a larger number of hits than sequence alignment, and is >50 times faster than alignment-based techniques.
Doctor of Philosophy
Antimicrobial resistance (AMR) is one of the biggest threats to public health. It has been estimated that by 2050 the number of deaths caused by AMR will surpass those caused by cancer. The seriousness of these projections requires urgent action to understand and control the spread of AMR. In the last few years, metagenomics has stood out as a reliable tool for the analysis of microbial diversity and AMR. Through next-generation sequencing, metagenomic studies can generate millions of short sequencing reads that are processed by computational tools. However, with the rapid adoption of metagenomics, a large amount of data has been generated, requiring the development of computational tools and pipelines to manage data scalability, accessibility, and performance. In this thesis, several strategies, ranging from command-line and web-based platforms to machine learning, have been developed to address these computational challenges: computational pipelines to process metagenomics data in the cloud and on distributed systems, machine learning and deep learning tools that ease the computational cost of detecting antibiotic resistance genes in metagenomic data, and the integration of crowdsourcing as a way to curate and validate antibiotic resistance genes.
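A hypothetical sketch of the general idea behind MetaMLP-style classification described in the technical abstract above (k-mer featurization of protein sequences feeding a small multilayer perceptron); the sequences, labels and feature choices below are invented and do not reproduce the published tool:

```python
# Dipeptide-frequency featurization + a small MLP classifier (toy data).
from itertools import product
import numpy as np
from sklearn.neural_network import MLPClassifier

AA = "ACDEFGHIKLMNPQRSTVWY"
KMERS = {"".join(p): i for i, p in enumerate(product(AA, repeat=2))}  # 400 dipeptides

def featurize(seq):
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - 1):
        v[KMERS[seq[i:i+2]]] += 1
    return v / max(len(seq) - 1, 1)

seqs = ["MKTAYIAKQR", "GSHMRYFWAV", "MKKTAIAIAV", "GSGSGSGSGS"] * 10
labels = [1, 0, 1, 0] * 10                      # toy: 1 = resistance gene family
X = np.array([featurize(s) for s in seqs])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, labels)
print(clf.predict([featurize("MKTAYIAKQR")]))
```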
10

Schildt, Alexandra, and Jenny Luo. "Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279659.

Full text of the source
Abstract:
AI has quickly grown from being a vast concept to an emerging technology that many companies are looking to integrate into their businesses, and is generally considered an ongoing "revolution" transforming science and society altogether. Researchers and organizations agree that AI and the recent rapid developments in machine learning carry huge potential benefits. At the same time, there is an increasing worry that ethical challenges are not being addressed in the design and implementation of AI systems. As a result, AI has sparked a debate about what principles and values should guide its development and use. However, there is a lack of consensus about what values and principles should guide the development, as well as what practical tools should be used to translate such principles into practice. Although researchers, organizations and authorities have proposed tools and strategies for working with ethical AI within organizations, there is a lack of a holistic perspective tying together the tools and strategies proposed in ethical, technical and organizational discourses. The thesis aims to contribute knowledge to bridge this gap by addressing the following purpose: to explore and present the different tools and methods companies and organizations should have in order to build machine learning applications in a fair and transparent manner. The study is of a qualitative nature, and data collection was conducted through a literature review and interviews with subject matter experts. In our findings, we present a number of tools and methods to increase fairness and transparency. Our findings also show that companies should work with a combination of tools and methods, both outside and inside the development process, as well as in different stages of the machine learning development process. Tools used outside the development process, such as ethical guidelines, appointed roles, workshops and trainings, have positive effects on alignment, engagement and knowledge while providing valuable opportunities for improvement. Furthermore, the findings suggest that it is crucial to translate high-level values into low-level requirements that are measurable and can be evaluated against. We propose a number of pre-model, in-model and post-model techniques that companies can and should implement to increase fairness and transparency in their machine learning systems.
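As a concrete example of the post-model checks discussed above, one common starting point is to compare selection rates across groups; a minimal sketch with toy decisions and assumed group labels:

```python
# Demographic parity gap and disparate impact ratio on toy classifier output.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
print(f"selection rates: a={rate_a:.2f}, b={rate_b:.2f}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")
print(f"disparate impact ratio: {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")
```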
11

Madeo, Giovanni <1994>. "Machine learning tools for protein annotation: the cases of transmembrane β-barrel and myristoylated proteins". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10169/1/TesiMadeo.pdf.

Full text of the source
Abstract:
Biology is now a "Big Data Science" thanks to technological advancements allowing the characterization of the whole macromolecular content of a cell or a collection of cells. This opens interesting perspectives, but only a small portion of this data may be experimentally characterized. From this derives the demand for accurate and efficient computational tools for the automatic annotation of biological molecules. This is even more true when dealing with membrane proteins, on which my research project is focused, leading to the development of two machine learning-based methods: BetAware-Deep and SVMyr. BetAware-Deep is a tool for the detection and topology prediction of transmembrane beta-barrel proteins found in Gram-negative bacteria. These proteins are involved in many biological processes and are primary candidates as drug targets. BetAware-Deep exploits the combination of a deep learning framework (bidirectional long short-term memory) and a probabilistic graphical model (grammatical-restrained hidden conditional random field). Moreover, it introduces a modified formulation of the hydrophobic moment, designed to include evolutionary information. BetAware-Deep outperformed all the available methods in topology prediction and reported high scores in the detection task. Glycine myristoylation in Eukaryotes is the binding of a myristic acid to an N-terminal glycine. SVMyr is a fast method based on support vector machines designed to predict this modification in datasets of proteomic scale. It uses octapeptides as input and exploits computational scores derived from experimental examples and mean physicochemical features. SVMyr outperformed all the available methods for co-translational myristoylation prediction. In addition, it allows (as a unique feature) the prediction of post-translational myristoylation. Both the tools described here are designed with the best practices for the development of machine learning-based tools outlined by the bioinformatics community in mind. Moreover, they are made available via user-friendly web servers. All this makes them valuable tools for filling the gap between sequenced and annotated data.
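The SVMyr setup, octapeptides classified by a support vector machine, can be sketched as follows (one-hot encoding and toy peptides are assumptions for illustration; the published tool uses experimentally derived scores and physicochemical features):

```python
# One-hot-encoded N-terminal octapeptides fed to an SVM (toy data).
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(octapeptide):
    v = np.zeros(8 * len(AA))
    for i, aa in enumerate(octapeptide):
        v[i * len(AA) + AA.index(aa)] = 1.0
    return v

peptides = ["GNAASKKR", "GQELSKHA", "MKTAYIAK", "MSSHLKRA"] * 5
labels   = [1, 1, 0, 0] * 5          # toy: 1 = myristoylated N-terminal glycine
X = np.array([one_hot(p) for p in peptides])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)
print(clf.predict_proba([one_hot("GNAASKKR")]))
```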
12

BALDAZZI, GIULIA. "Advanced signal processing and machine learning tools for non-invasive foetal electrocardiography and intracardiac electrophysiology." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1082764.

Full text of the source
Abstract:
In the last decades, bioengineering research has promoted improvements in human health and wellbeing through the development, optimization and evaluation of innovative technologies and medical devices for both diagnosis and therapy. In this context, the exploitation of advances in biomedical technology plays a key role in the study and treatment of heart disorders. This PhD thesis focuses on two main application areas: on one hand, foetal cardiac physiology and electrocardiography and, on the other, intracardiac electrophysiology, substrate mapping and radiofrequency ablation. It aims to provide new instruments and insights that go beyond the current state of the art through the development of novel signal processing and machine learning tools supporting the diagnosis and treatment of cardiac diseases. Non-invasive foetal ECG (fECG) is a long-standing niche research topic characterized by a continuous demand for improved solutions to the problem of recovering high-quality fECG signals from non-invasive trans-abdominal recordings. This PhD thesis focused on the development of algorithms for non-invasive fECG extraction and enhancement. Specifically, in collaboration with Prof. Hau-Tieng Wu (Department of Mathematics and Statistical Science, Duke University, Durham, NC, USA), a novel algorithm for the extraction of morphologically preserved multi-channel fECG signals was conceived. Furthermore, wavelet denoising was deeply investigated for the post-processing of fECG recordings, to quantitatively evaluate the noise-removal and morphology-preservation effects of different wavelet denoising approaches expressly tailored to this application domain. Intracardiac electrophysiology is a branch of interventional cardiology aimed at the diagnosis and treatment of arrhythmias by catheter-based techniques exploiting electroanatomic substrate mapping and ablation. In this scenario, this PhD thesis focused on post-ischaemic ventricular tachycardia, a life-threatening arrhythmia. Because electrophysiological studies and ablations are very time-consuming and operator-dependent, the first applied-research goal was the development of an effective tool able to support clinical experts in recognizing ablation targets during clinical procedures. Moreover, a detailed spectral characterization of post-ischaemic signals was performed, paving the way for novel approaches to advanced signal analysis, automatic recognition of arrhythmogenic substrates and, in general, a deeper understanding of arrhythmogenic mechanisms. Beyond its scientific content, this PhD thesis makes an important contribution from an industrial perspective in both fields. In fact, automated signal processing tools for non-invasive fECG signals can improve the detection capabilities of current tools, to be clinically exploited for low-cost antenatal screening. At the same time, novel methods for ablation-target recognition in cardiac electrophysiology could be embedded as plug-ins in future electroanatomic mapping systems to enhance current computer-aided methods.
13

Paiano, Michele. "Sperimentazione di tools per la creazione e l'addestramento di Generative Adversarial Networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find the full text of the source
Abstract:
The main subject of this work is artificial neural networks, and in particular a class of generative models called "Generative Adversarial Networks". These represent a stepping stone towards building Artificial Intelligence systems capable of consuming raw data from the real world and automatically extracting from it an understanding that represents the intrinsic structure of the world. This is a great step forward compared to the systems used in the past, which were able to learn from training data carefully pre-labelled by competent human beings. The final aim of this thesis is not to exhibit state-of-the-art results, but rather to provide an accurate treatment and to describe the various phases of designing and implementing a simple generative adversarial network using modern machine learning development tools.
14

Jarvis, Matthew P. "Applying machine learning techniques to rule generation in intelligent tutoring systems." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-112724.

Full text of the source
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Intelligent Tutoring Systems; Model Tracing; Machine Learning; Artificial Intelligence; Programming by Demonstration. Includes bibliographical references.
15

Dubey, Anshul. "Search and Analysis of the Sequence Space of a Protein Using Computational Tools." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14115.

Full text of the source
Abstract:
A new approach to the process of Directed Evolution is proposed, which utilizes different machine learning algorithms. Directed Evolution is a process of improving a protein for catalytic purposes by introducing random mutations in its sequence to create variants. Through these mutations, Directed Evolution explores the sequence space, which is defined as all the possible sequences for a given number of amino acids. Each variant sequence is assigned to one of two classes, positive or negative, according to its activity or stability. By employing machine learning algorithms for feature selection on the sequences of these variants, the attributes, or amino acids, important for the classification into positive or negative can be identified. Support Vector Machines (SVMs) were utilized to identify the important individual amino acids of a protein, which have to be preserved to maintain its activity. The results for the case of beta-lactamase show that such residues can be identified with high accuracy while using a small number of variant sequences. Another class of machine learning problems, Boolean Learning, was used to extend this approach to identifying interactions between the different amino acids in a protein's sequence using the variant sequences. It was shown through simulations that such interactions can be identified for any protein with a reasonable number of variant sequences. For experimental verification of this approach, two fluorescent proteins, mRFP and DsRed, were used to generate variants, which were screened for fluorescence. Using Boolean Learning, an interacting pair was identified, which was shown to be important for the fluorescence. It was also shown through experiments and simulations that knowing such pairs can increase the fraction of active variants in a library. A Boolean Learning algorithm was also developed for this application, which can learn Boolean functions from data in the presence of classification noise.
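The SVM-based residue identification described above can be sketched as follows (synthetic variants with one artificially "critical" position; not the dissertation's data or exact procedure): one-hot encode the variant sequences, fit a linear SVM on the positive/negative labels, and rank positions by aggregate weight:

```python
# Rank sequence positions by the magnitude of linear-SVM weights (toy data).
import numpy as np
from sklearn.svm import LinearSVC

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(1)

def encode(seq):
    v = np.zeros(len(seq) * len(AA))
    for i, aa in enumerate(seq):
        v[i * len(AA) + AA.index(aa)] = 1.0
    return v

wild_type = "MKVLAAGIHT"
variants, labels = [], []
for _ in range(200):
    s = list(wild_type)
    for pos in rng.choice(len(s), size=2, replace=False):
        s[pos] = AA[rng.integers(len(AA))]
    variants.append("".join(s))
    labels.append(1 if s[4] == wild_type[4] else 0)   # toy rule: position 5 is critical

X = np.array([encode(s) for s in variants])
w = LinearSVC(C=1.0, max_iter=10000).fit(X, labels).coef_[0]
per_position = np.abs(w).reshape(len(wild_type), len(AA)).sum(axis=1)
print("most important position (0-based):", int(per_position.argmax()))
```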
16

Krishnan, Vidhya Gomathi. "Novel approaches to predict the effect of single nucleotide polymorphisms on protein function using machine learning tools." Thesis, University of Leeds, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400943.

Full text of the source
17

Wang, Kai. "Development of Machine Learning Based Bioinformatics Tools for CRISPR Detection, piRNA Identification, and Whole-Genome Bisulfite Sequencing Data Analysis." Miami University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=miami1546437447863901.

Full text of the source
18

Rampton, Travis Michael. "Deformation Twin Nucleation and Growth Characterization in Magnesium Alloys Using Novel EBSD Pattern Analysis and Machine Learning Tools." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/4451.

Full text of the source
Abstract:
Deformation twinning in magnesium alloys both facilitates slip and forms sites for failure. Currently, basic studies of twinning in Mg are facilitated by electron backscatter diffraction (EBSD), which is able to extract a myriad of information relating to crystalline microstructures. Although much information is available via EBSD, various problems relating to deformation twinning have not been solved. This dissertation provides new insights into deformation twinning in Mg alloys, with particular focus on AZ31. These insights were gained through the development of new EBSD and related machine learning tools that extract more information beyond what is currently accessed. The first tool, relating to the characterization of deformed and twinned materials, focuses on surface topography and crack detection. The intensity map across EBSD images contains vital information that can be used to detect the evolution of surface roughness and crack formation, which typically occurs at twin boundaries. The method of topography recovery resulted in reconstruction errors as low as 2% over a 500 μm length. The method was then applied to a 3 μm x 3 μm area of twinned tantalum which had experienced topographic alterations; the topography of the Ta correlated with other measured changes in the microstructure. Additionally, EBSD images were used to identify the presence of cracks in nickel microstructures. Several cracks were identified on the Ni specimen, demonstrating that cracks as thin as 34 nm could be measured. A further EBSD-based tool developed for this study was used to identify thin compression twins in Mg; these are often missed in a traditional EBSD scan due to their size relative to the electron probe. This tool takes advantage of crystallographic relationships that exist between parent and twinned grains: common planes that exist in both grains lead to bands of consistent intensity as a scan crosses a twin. Hence, twin boundaries in a microstructure can be recognized, even when they are associated with thin twins. Proof of concept was performed on known twins in Inconel 600, tantalum, and magnesium AZ31. This method was then used to search for undetected twins in a Mg AZ31 structure, revealing nearly double the number of twins compared with those initially measured by standard procedures. To uncover the driving forces behind deformation twinning in Mg, a machine learning framework was developed to leverage all of the data available from EBSD and use it to create physics-based models of twin nucleation and growth. The resultant models for nucleation and growth were measured to be up to 86.5% and 96.1% accurate, respectively. Each model revealed a unique combination of crystallographic attributes that affected twinning in the AZ31.
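The final modeling step, learning twin nucleation from crystallographic attributes, might look like the following toy sketch (the feature names, the synthetic "physics" and the classifier choice are all assumptions, not the dissertation's framework):

```python
# Predict twin nucleation from per-grain attributes; report held-out accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
schmid = rng.uniform(0.2, 0.5, n)                 # Schmid factor for twinning
grain_size = rng.lognormal(2.0, 0.4, n)           # microns
misorientation = rng.uniform(0, 60, n)            # degrees to neighbor
twinned = (schmid + 0.01 * grain_size + rng.normal(0, 0.1, n)) > 0.75  # toy rule

X = np.column_stack([schmid, grain_size, misorientation])
Xtr, Xte, ytr, yte = train_test_split(X, twinned, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {model.score(Xte, yte):.3f}")
```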
19

Fredriksson, Franzén Måns, and Nils Tyrén. "Anomaly detection for automated security log analysis : Comparison of existing techniques and tools." Thesis, Linköpings universitet, Databas och informationsteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177728.

Full text of the source
Abstract:
Logging security-related events is becoming increasingly important for companies. Log messages can be used for surveillance of a system or to assess the damage caused in the event of, for example, an infringement. Typically, large quantities of log messages are produced, making manual inspection for traces of unwanted activity quite difficult. It is therefore desirable to be able to automate the process of analysing log messages. One way of finding suspicious behavior within log files is to set up rules that trigger alerts when certain log messages fit the criteria. However, this requires prior knowledge about the system and what kind of security issues can be expected, meaning that novel attacks will not be detected with this approach. It can also be very difficult to determine what normal and abnormal behavior are. A potential solution to this problem is machine learning and anomaly-based detection. Anomaly detection is the process of finding patterns which do not conform to a defined notion of normal behavior. This thesis examines the process of going from raw log data to finding anomalies. Both existing log analysis tools and our own proof-of-concept implementation are used for the analysis. With the use of labeled log data, our implementation was able to reach a precision of 73.7% and a recall of 100%. The advantages and disadvantages of creating our own implementation, as opposed to using an existing tool, are presented and discussed along with several insights from the field of anomaly detection for log analysis.
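A minimal sketch of the unsupervised route discussed above, vectorizing raw log lines and flagging outliers with an isolation forest (toy messages and parameters, not the thesis's implementation):

```python
# TF-IDF vectorization of log lines + IsolationForest outlier flagging.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import IsolationForest

logs = [
    "sshd accepted password for alice from 10.0.0.5",
    "sshd accepted password for bob from 10.0.0.6",
    "sshd accepted password for carol from 10.0.0.7",
    "sshd failed password for root from 198.51.100.23",
] * 25
X = TfidfVectorizer().fit_transform(logs).toarray()
flags = IsolationForest(contamination=0.25, random_state=0).fit_predict(X)
for line, f in list(zip(logs, flags))[:4]:
    print("ANOMALY" if f == -1 else "normal ", "|", line)
```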
20

DESSI', DANILO. "Knowledge Extraction from Textual Resources through Semantic Web Tools and Advanced Machine Learning Algorithms for Applications in Various Domains." Doctoral thesis, Università degli Studi di Cagliari, 2020. http://hdl.handle.net/11584/285376.

Full text of the source
Abstract:
Nowadays there is a tremendous amount of unstructured data, often represented by texts, which is created and stored in a variety of forms in many domains such as patients' health records, social-network comments, scientific publications, and so on. This volume of data represents an invaluable source of knowledge, but unfortunately its mining is challenging for machines. At the same time, novel tools as well as advanced methodologies have been introduced in several domains, improving the efficacy and the efficiency of data-based services. Following this trend, this thesis shows how to parse data from text with Semantic Web based tools, feed data into Machine Learning methodologies, and produce services or resources to facilitate the execution of some tasks. More precisely, the use of Semantic Web technologies powered by Machine Learning algorithms has been investigated in the Healthcare and E-Learning domains through not-yet-experimented methodologies. Furthermore, this thesis investigates the use of some state-of-the-art tools to move data from texts to graphs for representing the knowledge contained in scientific literature. Finally, the use of a Semantic Web ontology and novel heuristics to detect insights from biological data in the form of graphs is presented. The thesis contributes to the scientific literature in terms of results and resources. Most of the material presented in this thesis derives from research papers published in international journals or conference proceedings.
21

Brown, Ryan Charles. "Development of Ground-Level Hyperspectral Image Datasets and Analysis Tools, and their use towards a Feature Selection based Sensor Design Method for Material Classification." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84944.

Full text of the source
Abstract:
Visual sensing in robotics, especially in the context of autonomous vehicles, has advanced quickly and many important contributions have been made in the areas of target classification. Typical to these studies is the use of the Red-Green-Blue (RGB) camera. Separately, in the field of remote sensing, the hyperspectral camera has been used to perform classification tasks on natural and man-made objects from typically aerial or satellite platforms. Hyperspectral data is characterized by a very fine spectral resolution, resulting in a significant increase in the ability to identify materials in the image. This hardware has not been studied in the context of autonomy as the sensors are large, expensive, and have non-trivial image capture times. This work presents three novel contributions: a Labeled Hyperspectral Image Dataset (LHID) of ground-level, outdoor objects based on typical scenes that a vehicle or pedestrian may encounter, an open-source hyperspectral interface software package (HSImage), and a feature selection based sensor design algorithm for object detection sensors (DLSD). These three contributions are novel and useful in the fields of hyperspectral data analysis, visual sensor design, and hyperspectral machine learning. The hyperspectral dataset and hyperspectral interface software were used in the design and testing of the sensor design algorithm. The LHID is shown to be useful for machine learning tasks through experimentation and provides a unique data source for hyperspectral machine learning. HSImage is shown to be useful for manipulating, labeling and interacting with hyperspectral data, and allows wavelength and classification based data retrieval, storage of labeling information and ambient light data. DLSD is shown to be useful for creating wavelength bands for a sensor design that increase the accuracy of classifiers trained on data from the LHID. DLSD shows accuracy near that of the full spectrum hyperspectral data, with a reduction in features on the order of 100 times. It compared favorably to other state-of-the-art wavelength feature selection techniques and exceeded the accuracy of an RGB sensor by 10%.
Ph. D.
22

Lee, Ji Hyun. "Development of a Tool to Assist the Nuclear Power Plant Operator in Declaring a State of Emergency Based on the Use of Dynamic Event Trees and Deep Learning Tools." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1543069550674204.

Full text of the source
23

Fuentes, Antonio. "Proactive Decision Support Tools for National Park and Non-Traditional Agencies in Solving Traffic-Related Problems." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/88727.

Full text of the source
Abstract:
Transportation engineers have recently begun to incorporate statistical and machine learning approaches to solve difficult problems, mainly due to the vast quantities of stochastic data collected (from sensors, video, and human observers). In transportation engineering, a transportation system is often delineated by jurisdiction boundaries and evaluated as such; however, it is ultimately defined by the considerations of the analyst in trying to answer the question of interest. In this dissertation, a transportation system located in Jackson, Wyoming under the jurisdiction of Grand Teton National Park and recognized as the Moose-Wilson Corridor is evaluated to identify transportation-related factors that influence its operational performance. The evaluation considers its unique prevailing conditions and takes into account future management strategies. The dissertation accomplishes this by detailing four distinct aspects in individual chapters; each chapter is a standalone manuscript with a detailed introduction, purpose, literature review, findings, and conclusion. Chapter 1 provides a general introduction and summarizes Chapters 2-6. Chapter 2 evaluates the operational performance of the Moose-Wilson Corridor's entrance station, where queueing performance and probability mass functions of vehicle arrival rates are determined. Chapter 3 evaluates a parking system within the Moose-Wilson Corridor at a popular attraction known as the Laurance S. Rockefeller Preserve, in which the system's operational performance is assessed and probability mass functions under different arrival and service rates are provided. Chapter 4 presents a data science approach to predicting the probability of vehicles stopping along the Moose-Wilson Corridor. The approach is a machine learning classification methodology known as a "decision tree"; probabilities of stopping at attractions are predicted from GPS tracking data that include entrance location, time of day, and stopping at attractions. Chapter 5 builds on many of the previous findings and presents a tool that uses a Bayesian methodology to determine the posterior distributions of observed arrival and service rates, which serve as bounds and inputs to an Agent-Based Model. The Agent-Based Model represents the Moose-Wilson Corridor under prevailing conditions and considers some of the primary operational changes in Grand Teton National Park's comprehensive management plan for the corridor. The implementation of an Agent-Based Model provides a flexible platform for modeling multiple aspects unique to a National Park, including visitor behavior and its interaction with wildlife. Lastly, Chapter 6 summarizes and concludes the dissertation.
Doctor of Philosophy
In this dissertation, a transportation system located in Jackson, Wyoming under the jurisdiction of Grand Teton National Park and recognized as the Moose-Wilson Corridor is evaluated to identify transportation-related factors that influence its operational performance. The evaluation considers its unique prevailing conditions and takes into account future management strategies. Furthermore, emerging analytical strategies are implemented to identify and address operational concerns in the transportation system. Thus, decision support tools for the evaluation of a unique National Park system are presented in four distinct manuscripts. The manuscripts cover traditional approaches that break down and evaluate traffic operations and identify mitigation strategies. Additionally, emerging machine learning strategies are applied to GPS tracks to determine which vehicles stop at park attractions. Lastly, an agent-based model is developed on a flexible platform to utilize the previous findings and evaluate the Moose-Wilson Corridor while considering future policy constraints and the unique interactions between visitors and the prevalent ecology and wildlife.
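The arrival-rate modeling mentioned in Chapter 2 can be illustrated by fitting a Poisson probability mass function to per-minute vehicle counts (synthetic counts and an assumed stationary arrival process; the dissertation's Bayesian treatment goes further):

```python
# Fit a Poisson pmf to per-minute arrival counts and compare with the empirical pmf.
import numpy as np
from collections import Counter
from math import exp, factorial

counts = np.random.default_rng(2).poisson(lam=3.2, size=600)   # vehicles per minute
lam = counts.mean()                                            # MLE of the arrival rate

empirical = Counter(counts)
for k in range(8):
    pois = exp(-lam) * lam**k / factorial(k)
    print(f"k={k}: empirical={empirical.get(k, 0)/len(counts):.3f}  Poisson={pois:.3f}")
```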
24

Tang, Danny M. Eng Massachusetts Institute of Technology. "Empowering novices to understand and use machine learning with personalized image classification models, intuitive analysis tools, and MIT App Inventor." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123130.

Full text of the source
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 129-131).
As machine learning permeates our society and manifests itself through commonplace technologies such as autonomous vehicles, facial recognition, and online store recommendations, it is necessary that the increasing number of people who rely on these tools understand how they work. As such, we need to develop effective tools and curricula for introducing machine learning to novices. My work focuses on teaching core machine learning concepts with image classification, one of the most basic and widespread examples of machine learning. I built a web interface that allows users to train and test personalized image classification models on pictures taken with their computers' webcams. Furthermore, I built an extension for MIT App Inventor, a platform for building mobile applications using a blocks-based programming language, that allows users to use the models they built in the web interface to classify objects in their mobile applications. Finally, I created high-school-level curricula for workshops based on using the aforementioned interface and App Inventor extension, and ran the workshops with two classes of high school students from Boston Latin Academy. My findings indicate that high school students with no machine learning background are able to learn and understand general concepts and applications of machine learning through hands-on, non-technical activities, as well as successfully utilize models they built for personal use.
by Danny Tang.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
25

May, Madeth. "Using tracking data as reflexive tools to support tutors and learners in distance learning situations : an application to computer-mediated communications." Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0106/these.pdf.

Full text of the source
Abstract:
In order to support learners and tutors during and after their Computer-Mediated Communication (CMC) activities, we conduct research focused on tracking CMC activities and exploiting CMC traces in distance learning situations. We addressed issues concerning (i) trace collection, (ii) trace representation, and (iii) trace analysis and visualization. Our answers to these issues are: a tracking approach for CMC tools: to track CMC activities at a fine grain, we propose to carry out the observation not only on the servers but also on the client machines; we distinguish three types of interactions: IHM (human-machine interactions), IHHM (human-human interactions mediated by machines) and IMM (machine-machine interactions), and two types of actions: AU (user actions outside the computing environment, not mediated) and AM (machine actions without user action); a general model of CMC traces, which allows CMC traces to be represented and structured in a common digital format independent of the communication tools; and the TrAVis platform (Tracking Data Analysis and Visualization), which we developed to let tutors and learners analyze and visualize CMC traces in real time. TrAVis aims to assist tutors in monitoring and assessing learners' collective activities by offering them different tools to build and visualize interaction indicators.
This research effort focuses particularly on the traces of synchronous and asynchronous interactions on Computer-Mediated Communication (CMC) tools, in situations of discussions, negotiations and arguments among learners. The main objective is to study how to use the collected traces to design "reflexive tools" for the participants in the learning process. Reflexive tools refer to the useful data indicators computed from the collected traces that support the participants in terms of awareness, assessment and evaluation of their CMC activities. We explored different tracking approaches and their limitations regarding trace collection, trace structuring, and trace visualization. To improve upon these limitations, we have proposed (i) an explicit tracking approach to efficiently track CMC activities, (ii) a generic model of CMC traces to answer the problems of CMC trace structuring, interoperability and reusability, and (iii) a platform, TrAVis (Tracking Data Analysis and Visualization tools), specifically designed and developed to assist the participants, both tutors and learners, in the task of exploiting CMC traces. Another crucial part of this research is the design of data indicators. The main objective is to propose different sets of data indicators in graphical representations in order to enhance the visualization and analysis of information about CMC activities. Three case studies and an experiment in an authentic learning situation were conducted during this research work to evaluate the technical aspects of the tracking approach and the utility of TrAVis with respect to the pedagogical and learning objectives of the participants.
26

Kruczyk, Marcin. "Rule-Based Approaches for Large Biological Datasets Analysis : A Suite of Tools and Methods." Doctoral thesis, Uppsala, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-206137.

Full text of the source
Abstract:
This thesis is about new and improved computational methods to analyze complex biological data produced by advanced biotechnologies. Such data is not only very large but is also characterized by very high numbers of features. Addressing these needs, we developed a set of methods and tools suitable for analyzing large data sets, including next-generation sequencing data, and built transparent models that may be interpreted by researchers not necessarily expert in computing. We focused on brain-related diseases. The first aim of the thesis was to employ the meta-server approach to finding peaks in ChIP-seq data. Taking existing peak finders, we created an algorithm that produces consensus results better than any single peak finder. The second aim was to use supervised machine learning to identify features that are significant in the predictive diagnosis of Alzheimer's disease in patients with mild cognitive impairment. This experience led to the development of a better feature selection method for rough sets, a machine learning method. The third aim was to deepen the understanding of the role that the STAT3 transcription factor plays in gliomas. Interestingly, we found that STAT3, in addition to being an activator, is also a repressor in certain rat and human glioma models. This was achieved by analyzing STAT3 binding sites in combination with epigenetic marks. STAT3 regulation was determined using expression data of untreated cells and cells after JAK2/STAT3 inhibition. The four papers constituting the thesis are preceded by an exposition of the biological, biotechnological and computational background that provides the foundations for the papers. The overall results of this thesis are witness to the mutually beneficial relationship played by Bioinformatics in modern Life Sciences and Computer Science.
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Torabi, Moghadam Behrooz. "Computational discovery of DNA methylation patterns as biomarkers of ageing, cancer, and mental disorders : Algorithms and Tools." Doctoral thesis, Uppsala universitet, Institutionen för cell- och molekylärbiologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-320720.

Повний текст джерела
Анотація:
Epigenetics refers to mitotically heritable modifications in gene expression without a change in the genetic code. A combination of molecular, chemical and environmental factors constituting the epigenome is involved, together with the genome, in setting up the unique functionality of each cell type. DNA methylation is the most studied epigenetic mark in mammals; a methyl group is added to the cytosine in a cytosine-phosphate-guanine dinucleotide, or CpG site. It has been shown to play a major role in various biological phenomena such as chromosome X inactivation, regulation of gene expression, cell differentiation and genomic imprinting. Furthermore, aberrant patterns of DNA methylation have been observed in various diseases, including cancer. In this thesis, we have utilized machine learning methods and developed new methods and tools to analyze DNA methylation patterns as biomarkers of ageing, cancer subtypes and mental disorders. In Paper I, we introduced a pipeline of Monte Carlo Feature Selection and rule-based modeling using ROSETTA in order to identify combinations of CpG sites that classify samples into different age intervals based on their DNA methylation levels. The combinations of genes that appeared to act together motivated us to develop an interactive pathway browser, named PiiL, to check the methylation status of multiple genes in a pathway. The tool enhances the detection of differential patterns of DNA methylation and/or gene expression by quickly assessing large data sets. In Paper III, we developed a novel unsupervised clustering method, methylSaguaro, for analyzing various types of cancers and detecting cancer subtypes based on their DNA methylation patterns. Using this method, we confirmed previously reported findings that challenge the histological grouping of the patients, and proposed new subtypes based on DNA methylation patterns. In Paper IV, we investigated DNA methylation patterns in a cohort of schizophrenic and healthy samples, using all the methods introduced and developed in the first three papers.
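As a rough illustration of methylation-based subtyping, the sketch below applies generic hierarchical clustering to synthetic beta values; it is not the methylSaguaro method itself.

```python
# Minimal sketch: unsupervised clustering of samples by DNA methylation
# profiles (beta values in [0, 1], one row per sample, one column per
# CpG site). Generic hierarchical clustering, not the methylSaguaro method.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two synthetic "subtypes": hypo- vs hyper-methylated at the same sites.
subtype_a = rng.beta(2, 8, size=(10, 50))   # low beta values
subtype_b = rng.beta(8, 2, size=(10, 50))   # high beta values
betas = np.vstack([subtype_a, subtype_b])

tree = linkage(betas, method="average", metric="euclidean")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # samples 1-10 and 11-20 should fall into different clusters
```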
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Quaranta, Giacomo. "Efficient simulation tools for real-time monitoring and control using model order reduction and data-driven techniques." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/667474.

Повний текст джерела
Анотація:
Numerical simulation, the use of computers to run a program which implements a mathematical model of a physical system, is an important part of today's technological world. It is required in many scientific and engineering fields to study the behaviour of systems whose mathematical models are too complex to provide analytical solutions, and it makes virtual evaluation of system responses possible (virtual twins). This drastically reduces the number of experimental tests needed for accurate designs of the real system that the numerical model represents. However, these virtual twins, based on classical methods which make use of a rich representation of the system (e.g. the finite element method), rarely allow real-time feedback, even when considering high-performance computing on powerful platforms. In these circumstances, the real-time performance required in some applications is compromised. Indeed, the virtual twins are static; that is, they are used in the design of complex systems and their components, but they are not expected to accommodate or assimilate data so as to define dynamic data-driven application systems. Moreover, significant deviations between the observed response and the one predicted by the model are usually noticed, due to inaccuracy in the employed models, in the determination of the model parameters or in their time evolution. In this thesis we propose different methods to overcome these handicaps in order to perform real-time monitoring and control. In the first part, Model Order Reduction (MOR) techniques are used to accommodate real-time constraints; they compute a good approximation of the solution by simplifying the solution procedure instead of the model. The accuracy of the predicted solution is not compromised, and efficient simulations can be performed (digital twins). In the second part, data-driven modelling is employed to fill the gap between the parametric solution computed using non-intrusive MOR techniques and the measured fields, in order to make dynamic data-driven application systems (DDDAS) possible (hybrid twins).
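To make the MOR idea concrete, here is a minimal sketch of projection-based model order reduction via proper orthogonal decomposition (POD), assuming a snapshot matrix is available; the synthetic modes stand in for a real full-order simulation.

```python
# Minimal sketch: projection-based model order reduction via proper
# orthogonal decomposition (POD). A snapshot matrix of full-order states
# is compressed with a truncated SVD; synthetic data stands in for a
# real simulation.
import numpy as np

n_dof, n_snapshots, rank = 1000, 50, 5
rng = np.random.default_rng(1)
# Synthetic snapshots: a few smooth spatial modes with random amplitudes.
x = np.linspace(0, 1, n_dof)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(rank)], axis=1)
snapshots = modes @ rng.normal(size=(rank, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :rank]                      # reduced basis (n_dof x rank)

new_state = modes @ rng.normal(size=rank)
reduced = basis.T @ new_state            # rank coefficients instead of n_dof values
reconstructed = basis @ reduced
print("relative error:", np.linalg.norm(new_state - reconstructed)
      / np.linalg.norm(new_state))       # ~0 for states in the mode span
```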
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Rauschenberger, Maria. "Early screening of dyslexia using a language-independent content game and machine learning." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/667692.

Повний текст джерела
Анотація:
Children with dyslexia have difficulties in learning how to read and write. They are often diagnosed after they fail in school, even though dyslexia is not related to general intelligence. In this thesis, we present an approach for earlier screening of dyslexia using a language-independent game in combination with machine learning models trained with the interaction data. By earlier we mean before children learn how to read and write. To reach this goal, we designed the game content with knowledge from the analysis of word errors made by people with dyslexia in different languages, and of the parameters reported to be related to dyslexia, such as auditory and visual perception. With our two designed games (MusVis and DGames) we collected data sets (313 and 137 participants) in different languages (mainly Spanish and German) and evaluated them with machine learning classifiers. For MusVis we mainly use content that refers to one single acoustic or visual indicator, while DGames content refers to generic content related to various indicators. Our method provides an accuracy of 0.74 for German and 0.69 for Spanish, and F1-scores of 0.75 for German and 0.75 for Spanish, in MusVis when Random Forest and Extra Trees classifiers are used. DGames was mainly evaluated with German and reached a peak accuracy of 0.67 and a peak F1-score of 0.74. Our results open the possibility of low-cost and early screening of dyslexia through the Web.
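A minimal sketch of the evaluation setup named in the abstract (Random Forest and Extra Trees scored with accuracy and F1) might look as follows; the synthetic feature matrix stands in for the real per-player game-interaction features.

```python
# Minimal sketch: evaluating Random Forest and Extra Trees classifiers
# with accuracy and F1, as in the abstract. The feature matrix below is
# synthetic; real inputs would be per-player game-interaction features.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (RandomForestClassifier(random_state=0),
              ExtraTreesClassifier(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          "accuracy:", round(accuracy_score(y_te, pred), 2),
          "F1:", round(f1_score(y_te, pred), 2))
```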
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Cambe, Jordan. "Understanding the complex dynamics of social systems with diverse formal tools." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN043/document.

Повний текст джерела
Анотація:
For the past two decades, electronic devices have revolutionized the traceability of social phenomena. Social dynamics now leave numerical footprints, which can be analyzed to better understand collective behaviors. The development of large online social networks (like Facebook, Twitter and, more generally, mobile communications) and connected physical structures (like transportation networks and geolocalised social platforms) has resulted in the emergence of large longitudinal datasets. These new datasets bring the opportunity to develop new methods to analyze temporal dynamics in and of these systems. Nowadays, the plurality of available data requires adapting and combining a plurality of existing methods in order to enlarge the global vision that one has of such complex systems. The purpose of this thesis is to explore the dynamics of social systems using three sets of tools: network science, statistical physics modeling and machine learning. This thesis starts by giving general definitions and some historical context for the methods mentioned above. After that, we show the complex dynamics induced by introducing an infinitesimal quantity of new agents into a Schelling-like model, and discuss the limitations of statistical model simulation. The third chapter shows the added value of using longitudinal data. We study the evolution of the behavior of bike-sharing system users and analyze the results of an unsupervised machine learning model aiming to classify users based on their profiles. The fourth chapter explores the differences between global and local methods for temporal community detection using scientometric networks. The last chapter merges complex network analysis and supervised machine learning in order to describe and predict the impact of new businesses on already established ones. We explore the temporal evolution of this impact and show the benefit of combining network topology measures with machine learning algorithms.
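For readers unfamiliar with the model perturbed in the second chapter, the following is a minimal sketch of a basic Schelling segregation model on a torus grid; the parameters are illustrative and not the thesis's setup.

```python
# Minimal sketch: a basic Schelling segregation model on a grid, the
# kind of statistical-physics model the second chapter perturbs with
# new agents. Parameters here are illustrative, not the thesis's setup.
import random

SIZE, THRESHOLD, STEPS = 20, 0.5, 10_000
random.seed(0)
cells = [1] * 180 + [2] * 180 + [None] * 40  # two groups plus vacancies
random.shuffle(cells)
grid = {(i, j): cells[i * SIZE + j] for i in range(SIZE) for j in range(SIZE)}

def unhappy(pos):
    i, j = pos
    me = grid[pos]
    neigh = [grid[((i + di) % SIZE, (j + dj) % SIZE)]
             for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    same = sum(1 for n in neigh if n == me)
    occupied = sum(1 for n in neigh if n is not None)
    return occupied > 0 and same / occupied < THRESHOLD

for _ in range(STEPS):
    pos = random.choice(list(grid))
    if grid[pos] is not None and unhappy(pos):
        empties = [p for p, v in grid.items() if v is None]
        target = random.choice(empties)
        grid[target], grid[pos] = grid[pos], None  # move to a vacancy

print("unhappy agents left:",
      sum(1 for p, v in grid.items() if v is not None and unhappy(p)))
```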
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Costa, Fausto Guzzo da. "Employing nonlinear time series analysis tools with stable clustering algorithms for detecting concept drift on data streams." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13112017-105506/.

Повний текст джерела
Анотація:
Several industrial, scientific and commercial processes produce open-ended sequences of observations which are referred to as data streams. We can understand the phenomena responsible for such streams by analyzing data in terms of their inherent recurrences and behavior changes. Recurrences support the inference of more stable models, which are, however, invalidated by behavior changes. External influences are regarded as the main agent acting on the underlying phenomena to produce such modifications along time, such as new investments and market policies impacting stocks, human intervention in climate, etc. In the context of Machine Learning, there is a vast research branch interested in investigating the detection of such behavior changes, which are also referred to as concept drifts. By detecting drifts, one can indicate the best moments to update modeling, therefore improving prediction results, the understanding and eventually the controlling of other influences governing the data stream. There are two main concept drift detection paradigms: the first based on supervised, and the second on unsupervised learning algorithms. The former faces great issues due to the infeasibility of labeling when streams are produced at high frequencies and in large volumes. The latter lacks theoretical foundations to provide detection guarantees. In addition, both paradigms fail to adequately represent temporal dependencies among data observations. In this context, we introduce a novel approach to detect concept drifts by tackling two deficiencies of both paradigms: i) the instability involved in data modeling, and ii) the lack of time dependency representation. Our unsupervised approach is motivated by Carlsson and Memoli's theoretical framework, which ensures a stability property for hierarchical clustering algorithms with respect to data permutation. To take full advantage of this framework, we employed Takens' embedding theorem to make data statistically independent after being mapped to phase spaces. Independent data were then grouped using the Permutation-Invariant Single-Linkage Clustering Algorithm (PISL), an adapted version of the agglomerative Single-Linkage algorithm, respecting the stability property proposed by Carlsson and Memoli. Our algorithm outputs dendrograms (seen as data models), which are proven to be equivalent to ultrametric spaces; therefore the detection of concept drifts is possible by comparing consecutive ultrametric spaces using the Gromov-Hausdorff (GH) distance. As a result, model divergences are indeed associated with data changes. We performed two main experiments to compare our approach to others from the literature, one considering abrupt and the other gradual changes. Results confirm our approach is capable of detecting concept drifts, both abrupt and gradual ones; however, it is more adequate for operating on complicated scenarios. The main contributions of this thesis are: i) the usage of Takens' embedding theorem as a tool to provide statistical independence to data streams; ii) the implementation of PISL in conjunction with GH (called PISLGH); iii) a comparison of detection algorithms in different scenarios; and, finally, iv) an R package (called streamChaos) that provides tools for processing nonlinear data streams as well as other algorithms to detect concept drifts.
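The pipeline's overall shape can be sketched as follows: delay-embed the stream, build a dendrogram per window, and flag drift when consecutive ultrametric models diverge. Plain single-linkage stands in for PISL, and a max-difference score is a crude stand-in for the Gromov-Hausdorff distance, so this is an assumption-laden illustration, not the thesis's algorithm.

```python
# Minimal sketch of the pipeline's shape: delay-embed a stream (Takens),
# build a single-linkage dendrogram per window, and flag drift when
# consecutive ultrametric (cophenetic) distance matrices diverge.
import numpy as np
from scipy.cluster.hierarchy import cophenet, linkage

def delay_embed(series, dim=3, lag=2):
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])

def window_model(points):
    tree = linkage(points, method="single")
    return cophenet(tree)  # condensed ultrametric distance matrix

rng = np.random.default_rng(2)
stream = np.concatenate([np.sin(np.linspace(0, 20, 200)),   # regime 1
                         rng.normal(size=200)])             # regime 2 (drift)
windows = [stream[i:i + 50] for i in range(0, len(stream) - 50, 50)]
models = [window_model(delay_embed(w)) for w in windows]

for k in range(1, len(models)):
    # Crude divergence score between consecutive models; a stand-in for
    # the Gromov-Hausdorff distance used in the thesis.
    score = np.max(np.abs(models[k] - models[k - 1]))
    print(f"windows {k-1}->{k}: divergence {score:.2f}")
```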
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Hefke, Lena [Verfasser], Ewgenij [Gutachter] Proschak, and Stefan [Gutachter] Knapp. "Using fingerprints and machine learning tools for the prediction of novel dual active compounds for leukotriene A4 hydrolase and soluble epoxide hydrolase / Lena Hefke ; Gutachter: Ewgenij Proschak, Stefan Knapp." Frankfurt am Main : Universitätsbibliothek Johann Christian Senckenberg, 2020. http://d-nb.info/122685320X/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Abokersh, Mohamed. "Decision Making Tools for Sustainable Transition Toward Low Carbon Energy Technologies in the Residential Sector." Doctoral thesis, Universitat Rovira i Virgili, 2021. http://hdl.handle.net/10803/671958.

Повний текст джерела
Анотація:
Aligning with the ambitious EU 2030 climate and energy package for cutting greenhouse gas emissions and replacing conventional heat sources with a share of renewable energy to achieve net-zero-energy communities, stakeholders in the residential sector face several technical, economic and environmental issues in meeting the EU targets in the near future. This thesis focuses on two key structural transformations needed for a sustainable transition towards clean energy production: the low-carbon energy technologies problem, represented by solar district heating systems coupled with seasonal energy storage, and its application to achieve Nearly Zero Energy Buildings. These challenges are tackled through the design and optimization of clean energy systems, combined with machine learning and data analysis, to develop Computer-Aided Process Engineering tools. These tools would help address the stakeholders' challenges, thus contributing to the transition towards a more sustainable future.
Стилі APA, Harvard, Vancouver, ISO та ін.
34

ILARDI, DAVIDE. "Data-driven solutions to enhance planning, operation and design tools in Industry 4.0 context." Doctoral thesis, Università degli studi di Genova, 2023. https://hdl.handle.net/11567/1104513.

Повний текст джерела
Анотація:
This thesis proposes three different data-driven solutions to be combined with state-of-the-art solvers and tools, primarily to enhance their computational performance. The problem of efficiently designing the open-sea floating platforms on which wind turbines can be mounted is tackled, as well as the tuning of a data-driven engine-monitoring tool for maritime transportation. Finally, the activities of SAT and ASP solvers are thoroughly studied, and a deep learning architecture is proposed to enhance the heuristics-based solving approach adopted by such software. The covered domains are different, and the same is true of their respective targets. Nonetheless, the proposed Artificial Intelligence and Machine Learning algorithms are shared, as is the overall picture: promote Industrial AI and meet the constraints imposed by the Industry 4.0 vision. A reduced human-in-the-loop presence, a data-driven approach to discovering causalities otherwise ignored, special attention to the environmental impact of industries' emissions, and a real and efficient exploitation of the Big Data available today are just a subset of the latter. Hence, from a broader perspective, the experiments carried out within this thesis are driven towards the aforementioned targets, and the resulting outcomes are satisfactory enough to potentially convince the research community and industrialists that these are not just "visions" but can actually be put into practice. However, this is still an introduction to the topic, and the developed models are at what can be defined as a "pilot" stage. Nonetheless, the results are promising, and they pave the way towards further improvements and the consolidation of the dictates of Industry 4.0.
Стилі APA, Harvard, Vancouver, ISO та ін.
35

BELCORE, ELENA. "Generation of a Land Cover Atlas of environmental critic zones using unconventional tools." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2907028.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Thun, Julia, and Rebin Kadouri. "Automating debugging through data mining." Thesis, KTH, Data- och elektroteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-203244.

Повний текст джерела
Анотація:
Contemporary technological systems generate massive quantities of log messages. These messages can be stored, searched and visualized efficiently using log management and analysis tools. The analysis of log messages offers insights into system behavior such as performance, server status and execution faults in web applications. iStone AB wants to explore the possibility of automating its debugging process. Since iStone does most of its debugging manually, it takes time to find errors within the system. The aim was therefore to find solutions that reduce the time it takes to debug. An analysis of log messages within access and console logs was made, so that the most appropriate data mining techniques for iStone's system could be chosen. Data mining algorithms and log management and analysis tools were compared. The result of the comparisons showed that the ELK Stack, as well as a mixture of Eclat and a hybrid algorithm (Eclat and Apriori), were the most appropriate choices. To demonstrate their feasibility, the ELK Stack and Eclat were implemented. The produced results show that data mining and the use of a platform for log analysis can facilitate debugging and reduce the time it takes.
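As an illustration of the mining step, here is a minimal sketch of the Eclat idea, frequent itemset mining with vertical transaction-id lists; the toy transactions stand in for parsed log messages, and this is not iStone's actual implementation.

```python
# Minimal sketch of the Eclat idea: frequent itemset mining over log
# events using vertical transaction-id lists and recursive intersection.
# The toy "transactions" stand in for parsed log messages.
def eclat(prefix, items, min_support, results):
    # items: list of (item, tid_set) pairs, each already >= min_support
    while items:
        item, tids = items.pop()
        new_prefix = prefix + [item]
        results[frozenset(new_prefix)] = len(tids)
        # Extend the prefix with every remaining item whose tid-list
        # intersection is still frequent.
        suffix = [(other, tids & other_tids)
                  for other, other_tids in items
                  if len(tids & other_tids) >= min_support]
        eclat(new_prefix, suffix, min_support, results)

transactions = [{"timeout", "retry"}, {"timeout", "retry", "500"},
                {"500", "retry"}, {"timeout", "500"}]
tidlists = {}
for tid, tx in enumerate(transactions):
    for item in tx:
        tidlists.setdefault(item, set()).add(tid)

results = {}
frequent_items = [(i, t) for i, t in tidlists.items() if len(t) >= 2]
eclat([], frequent_items, min_support=2, results=results)
for itemset, support in sorted(results.items(), key=lambda kv: -kv[1]):
    print(set(itemset), support)
```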
Стилі APA, Harvard, Vancouver, ISO та ін.
37

GALLO, Giuseppe. "Architettura e second digital turn, l’evoluzione degli strumenti informatici e il progetto." Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/514731.

Повний текст джерела
Анотація:
The digital condition that has gradually hybridized our lives, transforming atoms into bits, has now cemented itself in our society, enriching post-modernity and determining a new form of liquidity that has sharpened with the advent of the internet. It is a historical moment marked by a new digital maturity, evident in our changed relationship with data and in the spread of advanced machine learning methods, which both promise a new understanding of contemporary complexity and contribute to the propagation of the technical apparatus throughout the world. These changes, so profound as to affect our culture, are changing our way of perceiving space, and therefore of inhabiting it: conditions that undoubtedly have repercussions on architectural design in its capacity as a human activity geared towards human beings. The increased complexity that touched our discipline with Postmodernism has meanwhile found new support in Derridean deconstruction, in a historical moment marked by great emphasis on the opportunities that digital tools offer. These are means we first welcomed into our discipline exclusively as tools for representation, and ones that then themselves determined the emergence of new approaches based on the inclusive potential of continuity and variation. None of the protagonists of the first digital turn could probably have imagined the effects that digital culture is now having on architectural design, a culture strengthened by almost thirty years of methodological and formal experimentation as well as organizational and instrumental change, from the rise of BIM to the new algorithmic possibilities well represented by visual programming languages and numerical simulations. These tools have concentrated the push towards the digital, which in architecture has meanwhile undergone a second turn, identified by Carpo in the design approaches now possible thanks to a new availability of data. This condition inevitably affects both science and architectural design, yet it is not sufficient to describe a contemporaneity in which technology spreads its wings over architecture, affecting the meaning of our role within society. With these multifaceted considerations as a starting point, and fully aware of the complexity of the dialogue we must engage in to reconstruct as neutral, historical and organic a vision as possible of the phase that architecture is experiencing, I believe a holistic approach is necessary: one that is inclusive, capable of expanding to the point of acquiring a philosophical perspective, as well as of descending to technical, operational, methodological, instrumental and relational detail. This objective is one I have striven to keep alive throughout my thesis, the condensation of three years of research, which in its various phases looks at the mutations that digital technology is producing in society and therefore in architectural design. My research is enriched by ten interviews with prominent protagonists of contemporary architecture, whom I thank for their great availability.
These testimonials allowed me to see the complexities of contemporary design up close and personal, and they represent a central part of this thesis, which equally aims to provide a historical interpretation of the challenges posed by contemporaneity and to identify the responsibilities that we must uphold for human beings to remain at the centre of our work.
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Wusteman, Judith. "EBKAT : an explanation-based knowledge acquisition tool." Thesis, University of Exeter, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280682.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Cooper, Clayton Alan. "Milling Tool Condition Monitoring Using Acoustic Signals and Machine Learning." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1575539872711423.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
40

BUBACK, SILVANO NOGUEIRA. "USING MACHINE LEARNING TO BUILD A TOOL THAT HELPS COMMENTS MODERATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19232@1.

Повний текст джерела
Анотація:
One of the main changes brought by Web 2.0 is the increase of user participation in content generation, mainly through social networks and comments on news and service sites. These comments are valuable to the sites because they bring feedback and motivate other people to participate and to spread the content. On the other hand, these comments also bring some kinds of abuse, such as bad words and spam. While for some sites their own community moderation is enough, for others this inappropriate content may compromise the service. In order to help these sites, a tool that uses machine learning techniques was built to help moderate comments. As a test to compare results, two datasets captured from Globo.com were used: the first one with 657,405 comments posted through its site and the second with 451,209 messages captured from Twitter. Our experiments show that the best result is achieved when classifier training is separated according to the subject being commented on.
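The per-subject finding could be sketched roughly as follows, with one bag-of-words classifier trained per subject; the tiny labeled comments are invented and the pipeline is an assumption, not the thesis's classifier.

```python
# Minimal sketch of the finding that per-subject training helps: one
# bag-of-words classifier per subject rather than a single global model.
# The tiny labeled comments are invented; 1 = should be moderated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

data = {
    "sports": [("great match", 0), ("ref is an idiot", 1),
               ("nice goal", 0), ("buy cheap tickets now", 1)] * 5,
    "politics": [("good speech", 0), ("spam link here", 1),
                 ("I disagree politely", 0), ("you moron", 1)] * 5,
}
models = {}
for subject, samples in data.items():
    texts, labels = zip(*samples)
    models[subject] = make_pipeline(
        TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

print(models["sports"].predict(["what an idiot ref"]))    # likely [1]
print(models["politics"].predict(["thoughtful speech"]))  # likely [0]
```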
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Hegemann, Lena. "Reciprocal Explanations : An Explanation Technique for Human-AI Partnership in Design Ideation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281339.

Повний текст джерела
Анотація:
Advancements in creative artificial intelligence (AI) are leading to systems that can actively work together with designers in tasks such as ideation, i.e. the creation, development, and communication of ideas. In human group work, making suggestions and explaining the reasoning behind them, as well as comprehending other group members' explanations, aids reflection, trust, alignment of goals and inspiration through diverse perspectives. Despite their ability to inspire through independent suggestions, state-of-the-art creative AI systems do not leverage these advantages of group work, due to missing or one-sided explanations. For other use cases, AI systems that explain their reasoning are already gathering wide research interest. However, there is a knowledge gap concerning the effects of explanations on creativity. Furthermore, it is unknown whether a user can benefit from also explaining their contributions to an AI system. This thesis investigates whether reciprocal explanations, a novel technique which combines explanations from and to an AI system, improve designers' and the AI's joint exploration of ideas. I integrated reciprocal explanations into an AI-aided tool for mood board design, a common method for ideation. In our implementation, the AI system uses text to explain which features of its suggestions match or complement the current mood board. Occasionally, it asks for user explanations, providing several options for answers, to which it reacts by aligning its strategy. A study was conducted with 16 professional designers who used the tool to create mood boards, followed by presentations and semi-structured interviews. The study emphasized a need for explanations that make the principles of the system transparent, and showed that alignment of goals motivated participants to provide explanations to the system. Also, enabling users to explain their contributions to the AI system facilitated reflection on their own reasoning.
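One direction of the reciprocal exchange, explaining which features of a suggestion match or complement the board, could be sketched as below; the tag representation and overlap heuristic are invented for illustration and are not the thesis's implementation.

```python
# Minimal sketch of one direction of reciprocal explanations: generating
# a text explanation of which features of a suggested image match the
# current mood board. Tags and the overlap heuristic are invented here.
def explain_suggestion(board_tags, suggestion_tags):
    matching = board_tags & suggestion_tags       # features that match
    complementing = suggestion_tags - board_tags  # features that complement
    parts = []
    if matching:
        parts.append("it shares " + ", ".join(sorted(matching))
                     + " with your board")
    if complementing:
        parts.append("it adds " + ", ".join(sorted(complementing)))
    return "Suggested because " + " and ".join(parts) + "."

board = {"pastel", "minimal", "nature"}
suggestion = {"pastel", "nature", "geometric"}
print(explain_suggestion(board, suggestion))
```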
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Binsaeid, Sultan Hassan. "Multisensor Fusion for Intelligent Tool Condition Monitoring (TCM) in End Milling Through Pattern Classification and Multiclass Machine Learning." Scholarly Repository, 2007. http://scholarlyrepository.miami.edu/oa_dissertations/7.

Повний текст джерела
Анотація:
In a fully automated manufacturing environment, instant detection of the condition state of the cutting tool is essential to improving productivity and cost effectiveness. In this work, a tool condition monitoring (TCM) system based on machine learning (ML) and machine ensemble (ME) approaches was developed to investigate the effectiveness of multisensor fusion when machining 4340 steel with a multi-layer coated, multi-flute carbide end mill cutter. Feature- and decision-level information fusion models utilizing assorted combinations of sensors were studied against selected ML algorithms, and their majority-vote ensemble, to classify gradual and transient tool abnormalities. The criterion for selecting the best model depends not only on classification accuracy but also on the simplicity of the implemented system, where the number of features and sensors is kept to a minimum to enhance the efficiency of the online acquisition system. In this study, 135 different features were extracted from sensory signals of force, vibration, acoustic emission and spindle power in the time and frequency domains, using data acquisition and signal processing modules. These features, along with machining parameters, were then evaluated for significance using different feature reduction techniques. Specifically, two feature extraction methods were investigated, independent component analysis (ICA) and principal component analysis (PCA), and two feature selection methods were studied, chi-square and correlation-based feature selection (CFS). For various multisensor fusion models, an optimal feature subset was computed. Finally, ML algorithms using support vector machines (SVM), multilayer perceptron neural networks (MLP), radial basis function neural networks (RBF) and their majority-voting ensemble were studied on the selected features to classify not only flank wear but also breakage and chipping. It was found that utilizing the multisensor feature fusion technique under a majority-vote ensemble gives the highest classification performance. In addition, SVM outperformed the other ML algorithms, while the CFS feature selection method surpassed the other reduction techniques in improving classification performance and producing optimal feature sets for different models.
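A minimal sketch of the overall recipe, feature-level fusion followed by feature selection and a majority-vote ensemble, might look as follows; synthetic features stand in for the force, vibration and power signals, and chi-square selection with an SVM/MLP voting ensemble only mirrors some of the abstract's ingredients.

```python
# Minimal sketch of feature-level fusion with feature selection and a
# majority-vote ensemble. Synthetic features stand in for the sensor
# signals; chi-square selection and an SVM/MLP voting ensemble mirror
# some of the abstract's ingredients.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
force, vibration, power = (rng.random((200, 10)) for _ in range(3))
X = np.hstack([force, vibration, power])       # feature-level fusion
y = rng.integers(0, 3, size=200)               # wear / breakage / chipping

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ensemble = make_pipeline(
    SelectKBest(chi2, k=10),                   # keep the 10 best features
    VotingClassifier([("svm", SVC()),
                      ("mlp", MLPClassifier(max_iter=1000))],
                     voting="hard"))           # majority vote
print("accuracy:", ensemble.fit(X_tr, y_tr).score(X_te, y_te))
```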
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Gert, Oskar. "Using Machine Learning as a Tool to Improve Train Wheel Overhaul Efficiency." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171121.

Повний текст джерела
Анотація:
This thesis develops a method for using machine learning in an industrial process. The implementation of this machine learning model aimed to reduce costs and increase the efficiency of train wheel overhaul, in partnership with the Austrian Federal Railroads, Oebb. Different machine learning models as well as category encodings were tested to find which performed best on the data set. In addition, differently sized training sets were used to determine whether the size of the training set affected the results. The implementation shows that Oebb can save money and increase the efficiency of train wheel overhaul by using machine learning, and that continuous training of prediction models is necessary because of variations in the data set.
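A minimal sketch of the comparison described above, different category encodings crossed with different training-set sizes, could look like this; the columns, sizes and target are invented, not Oebb's data.

```python
# Minimal sketch: comparing category encodings and training-set sizes
# for a tabular model, as the thesis does. Data, columns and sizes are
# invented; real inputs would be wheel-overhaul records.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([rng.choice(["A", "B", "C"], n),   # wheel type (made up)
                     rng.choice(["w1", "w2"], n)])     # workshop (made up)
y = (X[:, 0] == "A") & (X[:, 1] == "w2")               # synthetic target

for name, Encoder in [("one-hot", OneHotEncoder), ("ordinal", OrdinalEncoder)]:
    for size in (100, 400):
        model = make_pipeline(
            ColumnTransformer([("cat", Encoder(), [0, 1])]),
            RandomForestClassifier(random_state=0))
        model.fit(X[:size], y[:size])                  # vary training size
        print(name, size, "accuracy:",
              round(model.score(X[400:], y[400:]), 2))
```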
Стилі APA, Harvard, Vancouver, ISO та ін.
44

EDIN, ANTON, and MARIAM QORBANZADA. "E-Learning as a tool to support the integration of machine learning in product development processes." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279757.

Повний текст джерела
Анотація:
This research is concerned with possible applications of e-learning as an alternative to onsite training sessions for supporting the integration of machine learning into the product development process. Mainly, its aim was to study whether e-learning approaches are viable for laying a foundation for making machine learning more accessible in integrated product development processes. This topic is interesting because advances in the general understanding of it enable better remote learning as well as general scalability of knowledge transfer. To achieve this, two groups of employees belonging to the same corporate group but working in two very different geographical regions were asked to participate in a set of training sessions created by the authors. One group received the content via in-person workshops, whereas the other was invited to a series of remote tele-conferences. After both groups had participated in the sessions, some members were asked to be interviewed. Additionally, the authors arranged interviews with some of the participants' direct managers and project leaders, to compare the participants' responses with those of stakeholders not participating in the workshops. A combination of a qualitative theoretical analysis and the interview responses was used as the basis for the presented results. Respondents indicated that they preferred the onsite training approach; however, further coding of interview responses showed that there was little difference in the participants' ability to acquire knowledge. Interestingly, while the results point towards e-learning as a technology with many benefits, it seems that other shortcomings, mainly concerning the human interaction between learners, may hold back its full potential and thereby hinder its integration into product development processes.
45

Bheemireddy, Shruthi. "MACHINE LEARNING-BASED ONTOLOGY MAPPING TOOL TO ENABLE INTEROPERABILITY IN COASTAL SENSOR NETWORKS." MSSTATE, 2009. http://sun.library.msstate.edu/ETD-db/theses/available/etd-09222009-200303/.

Abstract:
In today's world, ontologies are widely used for data integration tasks and for solving information heterogeneity problems on the web, because of their capability to give explicit meaning to information. The growing need to resolve heterogeneities between different information systems within a domain of interest has led to the rapid development of individual ontologies by different organizations. Ontologies designed for a particular task may represent nothing more than the needs of that particular project. Integrating distributed and heterogeneous ontologies by finding semantic correspondences between their concepts has therefore become the key to achieving interoperability among different representations. In this thesis, an advanced instance-based ontology matching algorithm is proposed to enable data integration tasks in ocean sensor networks, whose data are highly heterogeneous in syntax, structure, and semantics. It provides a solution to the ontology mapping problem in such systems based on machine learning methods and string-based methods.
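The abstract does not include an implementation, but the instance-based, string-similarity matching it describes can be sketched in a few lines. Everything below (concept names, instance data, weights, thresholds) is an illustrative assumption, not the algorithm proposed in the thesis:

```python
from difflib import SequenceMatcher

def string_similarity(a: str, b: str) -> float:
    """Normalized edit-based similarity between two labels (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def instance_overlap(instances_a, instances_b, threshold=0.8):
    """Fraction of instances in A that closely match some instance in B."""
    if not instances_a:
        return 0.0
    hits = sum(
        1 for x in instances_a
        if any(string_similarity(x, y) >= threshold for y in instances_b)
    )
    return hits / len(instances_a)

def match_concepts(onto_a, onto_b, min_score=0.5):
    """Propose concept correspondences between two ontologies.

    onto_a / onto_b map concept names to lists of instance strings.
    The 0.7 / 0.3 weighting of instance vs. label evidence is arbitrary.
    """
    mappings = []
    for ca, inst_a in onto_a.items():
        for cb, inst_b in onto_b.items():
            score = (0.7 * instance_overlap(inst_a, inst_b)
                     + 0.3 * string_similarity(ca, cb))
            if score >= min_score:
                mappings.append((ca, cb, round(score, 3)))
    return sorted(mappings, key=lambda m: -m[2])

# Two tiny, hypothetical coastal-sensor vocabularies.
buoy = {"WaterTemp": ["sea surface temperature", "SST"],
        "WindSpd": ["wind speed", "anemometer reading"]}
station = {"SeaSurfaceTemperature": ["SST", "surface temperature"],
           "WindSpeed": ["wind speed"]}
print(match_concepts(buoy, station))
```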
46

Hashmi, Muhammad Ali S. M. Massachusetts Institute of Technology. "Said-Huntington Discourse Analyzer : a machine-learning tool for classifying and analyzing discourse." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98543.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 71-74).
Critical discourse analysis (CDA) aims to understand the link "between language and the social" (Mautner and Baker, 2009), and attempts to demystify social construction and power relations (Gramsci, 1999). Corpus linguistics, on the other hand, deals with the principles and practice of understanding the language produced within large amounts of textual data (Oostdijk, 1991). In my thesis, I have aimed to combine, using machine learning, the CDA approach with corpus linguistics, with the intention of deconstructing dominant discourses that create, maintain, and deepen fault lines between social groups and classes. As an instance of this technological framework, I have developed a tool for understanding and defining the discourse on Islam in global mainstream media sources. My hypothesis is that media coverage in several mainstream news sources tends to contextualize Muslims largely as a group embroiled in conflict, to a disproportionate degree. It rests on the assumption that discourse on Islam in mainstream global media tends to lean toward the dangerous "clash of civilizations" frame. To test this hypothesis, I have developed a prototype tool, the "Said-Huntington Discourse Analyzer," which machine-classifies news articles on a normative scale measuring "clash of civilizations" polarization in an article on the basis of conflict. The tool also extracts semantically meaningful conversations for a media source using Latent Dirichlet Allocation (LDA) topic modeling, allowing users to discover frames of conversation on the basis of the Said-Huntington index classification. I evaluated the classifier on human-classified articles and found its accuracy to be very high (99.03%). Text analysis tools generally uncover patterns and trends in data without delineating the 'ideology' that permeates the text. The machine learning tool presented here classifies media discourse on Islam in terms of conflict and non-conflict, and attempts to shed light on that 'ideology'. In addition, the tool provides textual analysis of news articles based on CDA methodologies.
by Muhammad Ali Hashmi.
S.M.
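The LDA topic-modeling step described above is a standard technique, and a minimal sketch with scikit-learn follows. The toy corpus, topic count, and preprocessing are placeholder assumptions, not the thesis's actual pipeline or data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus; the thesis analyzed large news archives instead.
articles = [
    "peace talks and interfaith dialogue continue in the region",
    "violent clashes erupt as the conflict escalates",
    "community festival celebrates shared cultural heritage",
    "military offensive intensifies along the disputed border",
]

# Bag-of-words representation, dropping very common English words.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(articles)

# Fit a small LDA model; the topic count here is an assumption.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)

# Inspect the top words per topic, the usual way LDA output is read.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```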
47

McCoy, Mason Eugene. "A Twitter-Based Prediction Tool for Digital Currency." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/theses/2302.

Abstract:
Digital currencies (cryptocurrencies) are rapidly becoming commonplace in the global market. Trading is performed much as in the stock or commodities markets, but stock market prediction algorithms are not necessarily well suited to predicting digital currency prices. In this work, we analyzed tweets with both an existing sentiment analysis package and a manually tailored "objective analysis," yielding one impact value per analysis for each 15-minute period. We then used evolutionary techniques to select the most appropriate training method, the best subset of the generated features to include, and other parameters. The resulting predictors yielded much more profit in four-week simulations than simply holding a digital currency over the same period: the results ranged from 28% to 122% profit. Unlike stock exchanges, which shut down for several hours or days at a time, digital currency prediction and trading appears to be of a more consistent and predictable nature.
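The evolutionary selection step lends itself to a short sketch. The genetic algorithm below picks a feature subset under a placeholder fitness function; in the thesis, fitness would presumably reflect simulated trading profit, and every name and parameter here is an illustrative assumption:

```python
import random

random.seed(42)
N_FEATURES = 10          # e.g., sentiment/"objective" impact values at various lags
POP, GENS, MUT = 20, 30, 0.1

def fitness(mask):
    """Placeholder score; the thesis evaluated predictors via trading simulations.

    Here we simply pretend even-indexed features are informative and
    penalize large subsets, just to give the GA something to optimize.
    """
    return sum(1 for i, bit in enumerate(mask) if bit and i % 2 == 0) - 0.1 * sum(mask)

def mutate(mask):
    """Flip each bit with probability MUT."""
    return [1 - b if random.random() < MUT else b for b in mask]

def crossover(a, b):
    """Single-point crossover of two bit masks."""
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("selected feature mask:", best)
```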
48

Jakob, Persson. "How to annotate in video for training machine learning with a good workflow." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-187078.

Abstract:
Artificial intelligence and machine learning are used in many different areas; one of these is image recognition. In the production of a TV show or film, image recognition can help editors find specific objects, scenes, or people in the video content, which speeds up production. But image recognition does not always work perfectly, and in its current state it cannot be used in TV or film production as intended. The image recognition algorithms therefore need to be trained on large datasets to improve, yet creating these datasets takes time, so tools are needed that let users create specific datasets and retrain algorithms. The aim of this master's thesis was to investigate whether it is possible to create a tool that can annotate objects and people in video content and use the data as training sets, together with a tool that can retrain an image recognition model on the basis of its output so that it improves. It was also important that the tools offer users a good workflow. The study consisted of a theoretical part, to gain more knowledge about annotation and about how to create a good UX design with a good workflow. Interviews were also held to establish the requirements of the product. This resulted in a user scenario and a workflow that were used, together with the knowledge from the theoretical study, to create a hi-fi prototype through an iterative process with usability testing. The outcome is a final hi-fi prototype with a good design and a good workflow, in which users can annotate objects and people with bounding boxes and retrain an image recognition program that has been run on video content.
Artificial intelligence and machine learning are used in many different areas; one of these is image recognition. In the production of a TV programme or film, image recognition can be used to help editors find specific objects, scenes, or people in the video content, which speeds up production. But image recognition programs do not always work perfectly and cannot yet be used in the production of a TV programme or film as intended in that context. To improve image recognition programs, their algorithms need to be trained on large datasets of images and labels. Creating these datasets takes time, however, and programs are needed that can build datasets and retrain image recognition algorithms so that they perform better. The purpose of this thesis project was to investigate whether it is possible to create a tool that can mark (annotate) objects and people in video and use the data as training data for algorithms, and also a tool that can retrain image recognition algorithms using the data obtained from an image recognition program. It was also important that these tools offer users a good workflow. The study consisted of a theoretical part, to gain more knowledge about annotation in video and about how to create good UX design with a good workflow. Interviews were also held to learn more about the requirements of the product and who would use it. This resulted in a user scenario and a workflow that were used, together with the knowledge from the theoretical study, to create a hi-fi prototype through an iterative process with usability testing. This led to a final hi-fi prototype with a good design and a good workflow, in which it is possible to mark (annotate) objects and people with a bounding box and to retrain image recognition algorithms that have been run on video.
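The core data such an annotation tool produces, bounding boxes tied to video frames, can be sketched as a small export format. The class, field names, and JSON layout below are hypothetical illustrations; the thesis produced a hi-fi prototype, not an implementation, so none of this is its actual design:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class BoxAnnotation:
    """One bounding box on one video frame (x, y = top-left corner, pixels)."""
    frame: int
    label: str
    x: float
    y: float
    width: float
    height: float

def export_annotations(video_id: str, boxes: list, path: str) -> None:
    """Write annotations as JSON that a training pipeline could ingest."""
    payload = {"video": video_id,
               "annotations": [asdict(b) for b in boxes]}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, indent=2)

# Hypothetical usage: two boxes drawn by an editor on consecutive frames.
boxes = [BoxAnnotation(120, "person", 340.0, 96.0, 180.0, 420.0),
         BoxAnnotation(121, "person", 344.0, 98.0, 181.0, 419.0)]
export_annotations("episode_03_clip_7", boxes, "annotations.json")
```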
49

Massaccesi, Luciano. "Machine Learning Software for Automated Satellite Telemetry Monitoring." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20502/.

Abstract:
During the lifetime of a satellite, malfunctions may occur. Unexpected behaviours are monitored using sensors all over the satellite; the telemetry values are sent to Earth and analysed in search of anomalies. These anomalies could be detected by humans, but this is considerably expensive, so machine learning techniques can be applied to lower the costs. In this research, many different machine learning techniques are tested and compared using satellite telemetry data provided by OHB System AG. The fact that the anomalies are collective, together with certain properties of the data, is exploited to improve the performance of the machine learning algorithms. Since the data come from a real spacecraft, they have some shortcomings: they cover only a short time span and, because the spacecraft was healthy, contain no critical anomalies. Some steps are therefore taken to improve the evaluation of the algorithms.
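As one example of the kind of technique the thesis compares, the sketch below runs an Isolation Forest over sliding windows of a synthetic telemetry channel, so that collective (multi-sample) anomalies are visible to the model. The signal, window length, and contamination rate are all assumptions; the thesis used proprietary OHB telemetry and compared several algorithms:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic telemetry: a smooth channel with a short collective anomaly.
t = np.arange(2000)
signal = np.sin(t / 50.0) + 0.05 * rng.standard_normal(t.size)
signal[1200:1230] += 1.5          # injected collective anomaly

# Sliding windows let the model see short sequences rather than isolated
# points, matching the collective nature of the anomalies.
WIN = 20
windows = np.lib.stride_tricks.sliding_window_view(signal, WIN)

model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(windows)          # -1 marks anomalous windows

anomalous_starts = np.where(labels == -1)[0]
print("anomalous windows start near samples:", anomalous_starts[:10])
```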
50

Spies, Lucas Daniel. "Machine-Learning based tool to predict Tire Noise using both Tire and Pavement Parameters." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91407.

Abstract:
Tire-Pavement Interaction Noise (TPIN) becomes the main noise contributor for passenger vehicles traveling at speeds above 40 kph, and is thus one of the main contributors to environmental noise pollution in residential areas near highways. TPIN has been the subject of exhaustive study since the 1970s; yet, almost 50 years later, there is no accurate way to model it. This is a consequence of the large number of noise generation mechanisms involved in the phenomenon and their highly complex nature. It is acknowledged that the main mechanisms involve tire vibration and air pumping within the tire tread and pavement surface. Moreover, TPIN is the only vehicle noise source strongly affected by an external factor, namely pavement roughness. Over the last decade, new machine learning algorithms have been implemented to model TPIN; however, their development relies on experimental data and does not provide strong physical insight into the problem. This research studied the correct configuration of such tools. More specifically, Artificial Neural Network (ANN) configurations were studied, with their implementation based on the problem requirements (acoustic sound pressure prediction), and a customized neuron configuration improved the ANN's TPIN prediction capabilities. In the second stage of this thesis, tire noise tests were undertaken for different tires on different pavement surfaces on the Virginia Tech SMART road. The experimental data were used to develop an approach that accounts for the pavement profile when predicting TPIN. Finally, the new ANN configuration and the pavement-roughness approach were combined with previous work to obtain what is the first reasonably accurate and complete tool for predicting tire noise. The tool takes as inputs: 1) tire parameters, 2) pavement parameters, and 3) vehicle speed; it outputs narrowband tire noise spectra over the frequency range 400-1600 Hz.
Master of Science
Tire-Pavement Interaction Noise (TPIN) becomes the main noise contributor for passenger vehicles traveling at speeds above 40 kph, and is thus one of the main contributors to environmental noise pollution in residential areas near highways. TPIN has been the subject of exhaustive study since the 1970s; yet, almost 50 years later, there is no accurate way to model it. This is a consequence of the large number of noise generation mechanisms involved in the phenomenon and their highly complex nature. It is acknowledged that the main mechanisms involve tire vibration and air pumping within the tire tread and pavement surface. Moreover, TPIN is the only vehicle noise source strongly affected by an external factor, namely pavement roughness. Over the last decade, machine learning algorithms inspired by the structure of the human brain have been implemented to model TPIN; however, their development relies on experimental data and does not provide strong physical insight into the problem. This research focused on the correct configuration of such machine learning algorithms for the specific task of TPIN prediction, and a customized configuration improved their TPIN prediction capabilities. In the second stage of this thesis, tire noise tests were undertaken for different tires on different pavement surfaces on the Virginia Tech SMART road. The experimental data were used to develop an approach that accounts for pavement roughness when predicting TPIN. Finally, the new machine learning configuration and the pavement-roughness approach were combined with previous work to obtain what is the first reasonably accurate and complete computational tool for predicting tire noise. The tool takes as inputs: 1) tire parameters, 2) pavement parameters, and 3) vehicle speed.
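A minimal sketch of the kind of ANN regression the thesis describes, mapping tire parameters, pavement parameters, and speed to a narrowband spectrum, can be written with scikit-learn's MLPRegressor. The synthetic data, layer sizes, and band count below are assumptions; they do not reproduce the customized neuron configuration developed in the thesis:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in data: 6 input features (tire and pavement parameters
# plus speed) mapped to a 13-band narrowband spectrum over 400-1600 Hz.
n_samples, n_features, n_bands = 500, 6, 13
X = rng.uniform(0.0, 1.0, size=(n_samples, n_features))

# Fake targets: sound levels that rise with speed (the last feature)
# and fall slowly with frequency, plus measurement noise.
freqs = np.linspace(400, 1600, n_bands)
Y = (70 + 20 * np.outer(X[:, -1], np.ones(n_bands))
     - 5 * np.log10(freqs)
     + rng.normal(0, 1, (n_samples, n_bands)))

# A small multilayer perceptron; the layer sizes are illustrative only.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0)
model.fit(X, Y)

# Predict the spectrum for one unseen tire/pavement/speed combination.
x_new = rng.uniform(0.0, 1.0, size=(1, n_features))
print(np.round(model.predict(x_new), 1))
```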
