To see other types of publications on this topic, follow the link: MACHINE LEARNING TOOL.

Dissertations on the topic "MACHINE LEARNING TOOL"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Browse the top 50 dissertations for your research on the topic "MACHINE LEARNING TOOL".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf and read its abstract online, whenever these are available in the metadata.

Browse dissertations across a wide variety of disciplines and format your bibliography correctly.

1

Wusteman, Judith. "EBKAT : an explanation-based knowledge acquisition tool." Thesis, University of Exeter, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280682.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Cooper, Clayton Alan. "Milling Tool Condition Monitoring Using Acoustic Signals and Machine Learning." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1575539872711423.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

BUBACK, SILVANO NOGUEIRA. "USING MACHINE LEARNING TO BUILD A TOOL THAT HELPS COMMENTS MODERATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19232@1.

Full text of the source
Abstract:
One of the main changes brought by Web 2.0 is the increase in user participation in content generation, mainly in social networks and in comments on news and service sites. These comments are valuable to the sites because they bring feedback and motivate other people to participate and to spread the content. On the other hand, these comments also bring abuse such as profanity and spam. While for some sites their own community moderation is enough, for others this inappropriate content may compromise the service. In order to help these sites, a tool that uses machine learning techniques was built to moderate comments. As a test to compare results, two datasets captured from Globo.com were used: the first one with 657,405 comments posted through its site and the second with 451,209 messages captured from Twitter. Our experiments show that the best result is achieved when comment learning is done according to the subject that is being commented on.
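As a rough illustration of the per-topic strategy this abstract reports (training a separate classifier for each commented subject), the sketch below fits one text classifier per topic; the corpora, labels, and pipeline choices are invented stand-ins, not the Globo.com data or the thesis' actual models:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: one (texts, labels) corpus per commented subject.
corpora = {
    "sports": (["great match today", "what a goal", "buy cheap followers now",
                "click here spam deal"],
               ["ok", "ok", "reject", "reject"]),
    "news":   (["interesting analysis", "thanks for reporting",
                "cheap pills click here", "spam deal now"],
               ["ok", "ok", "reject", "reject"]),
}

# One independent classifier per topic, mirroring the per-subject split
# the abstract found to work best.
models = {topic: make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)
          for topic, (texts, labels) in corpora.items()}

def moderate(topic, comment):
    """Route the comment to its topic's classifier."""
    return models[topic].predict([comment])[0]

print(moderate("sports", "buy cheap deal now"))
```

Routing by topic keeps each model's vocabulary focused, which is one plausible reading of why the per-subject split helped.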
APA, Harvard, Vancouver, ISO, and other styles
4

Binsaeid, Sultan Hassan. "Multisensor Fusion for Intelligent Tool Condition Monitoring (TCM) in End Milling Through Pattern Classification and Multiclass Machine Learning." Scholarly Repository, 2007. http://scholarlyrepository.miami.edu/oa_dissertations/7.

Full text of the source
Abstract:
In a fully automated manufacturing environment, instant detection of the condition state of the cutting tool is essential to the improvement of productivity and cost effectiveness. In this work, a tool condition monitoring (TCM) system via a machine learning (ML) and machine ensemble (ME) approach was developed to investigate the effectiveness of multisensor fusion when machining 4340 steel with a multi-layer coated, multi-flute carbide end mill cutter. Feature- and decision-level information fusion models utilizing assorted combinations of sensors were studied against selected ML algorithms and their majority-vote ensemble to classify gradual and transient tool abnormalities. The criterion for selecting the best model depends not only on classification accuracy but also on the simplicity of the implemented system, where the number of features and sensors is kept to a minimum to enhance the efficiency of the online acquisition system. In this study, 135 different features were extracted from sensory signals of force, vibration, acoustic emission, and spindle power in the time and frequency domains by using data acquisition and signal processing modules. These features, along with machining parameters, were then evaluated for significance using different feature reduction techniques. Specifically, two feature extraction methods were investigated, independent component analysis (ICA) and principal component analysis (PCA), and two feature selection methods were studied, chi-square and correlation-based feature selection (CFS). For the various multi-sensor fusion models, an optimal feature subset was computed. Finally, ML algorithms using the support vector machine (SVM), multilayer perceptron neural network (MLP), radial basis function neural network (RBF), and their majority-voting ensemble were studied for the selected features to classify not only flank wear but also breakage and chipping. In this research, it has been found that utilizing the multisensor feature fusion technique under a majority-vote ensemble gives the highest classification performance. In addition, SVM outperformed the other ML algorithms, while the CFS feature selection method surpassed the other reduction techniques in improving classification performance and producing optimal feature sets for different models.
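A minimal sketch of the pipeline shape this abstract describes, feature selection feeding a hard majority-vote ensemble, using scikit-learn on synthetic data. Chi-square selection stands in for the reduction step; an RBF network is not available in scikit-learn, so k-NN stands in as the third voter, and all parameter values are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 135 sensory features (force, vibration, AE, power).
X, y = make_classification(n_samples=400, n_features=50, n_informative=10,
                           class_sep=2.0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# chi2 requires non-negative inputs, hence the scaling step before selection.
ensemble = make_pipeline(
    MinMaxScaler(),
    SelectKBest(chi2, k=15),
    VotingClassifier([("svm", SVC()),
                      ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
                      ("knn", KNeighborsClassifier())],
                     voting="hard"),
)
ensemble.fit(Xtr, ytr)
print(f"test accuracy: {ensemble.score(Xte, yte):.2f}")
```

Hard voting takes the majority class label across the three learners, the same decision rule the abstract's majority-vote ensemble uses.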
APA, Harvard, Vancouver, ISO, and other styles
5

Gert, Oskar. "Using Machine Learning as a Tool to Improve Train Wheel Overhaul Efficiency." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171121.

Full text of the source
Abstract:
This thesis develops a method for using machine learning in an industrial process. The implementation of this machine learning model aimed to reduce costs and increase the efficiency of train wheel overhaul in partnership with the Austrian Federal Railroads, Oebb. Different machine learning models as well as category encodings were tested to find which performed best on the data set. In addition, differently sized training sets were used to determine whether the size of the training set affected the results. The implementation shows that Oebb can save money and increase the efficiency of train wheel overhaul by using machine learning, and that continuous training of prediction models is necessary because of variations in the data set.
APA, Harvard, Vancouver, ISO, and other styles
6

EDIN, ANTON, and MARIAM QORBANZADA. "E-Learning as a tool to support the integration of machine learning in product development processes." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279757.

Full text of the source
Abstract:
This research is concerned with possible applications of e-learning as an alternative to onsite training sessions when supporting the integration of machine learning into the product development process. Mainly, its aim was to study whether e-learning approaches are viable for laying a foundation for making machine learning more accessible in integrated product development processes. This topic is interesting because advances in the general understanding of it enable better remote learning as well as general scalability of knowledge transfer. To achieve this, two groups of employees belonging to the same corporate group but working in two very different geographical regions were asked to participate in a set of training sessions created by the authors. One group received the content via in-person workshops, whereas the other was invited to a series of remote tele-conferences. After both groups had participated in the sessions, some members were asked to be interviewed. Additionally, the authors also arranged interviews with some of the participants' direct managers and project leaders to compare the participants' responses with those of stakeholders not participating in the workshops. A combination of a qualitative theoretical analysis together with the interview responses was used as the base for the presented results. Respondents indicated that they preferred the onsite training approach; however, further coding of interview responses showed that there was little difference in the participants' ability to obtain knowledge. Interestingly, while results point towards e-learning as a technology with many benefits, it seems as though other shortcomings, mainly concerning the human interaction between learners, may hold back its full potential and thereby hinder its integration into product development processes.
APA, Harvard, Vancouver, ISO, and other styles
7

Bheemireddy, Shruthi. "MACHINE LEARNING-BASED ONTOLOGY MAPPING TOOL TO ENABLE INTEROPERABILITY IN COASTAL SENSOR NETWORKS." MSSTATE, 2009. http://sun.library.msstate.edu/ETD-db/theses/available/etd-09222009-200303/.

Full text of the source
Abstract:
In today's world, ontologies are being widely used for data integration tasks and solving information heterogeneity problems on the web because of their capability in providing explicit meaning to the information. The growing need to resolve the heterogeneities between different information systems within a domain of interest has led to the rapid development of individual ontologies by different organizations. These ontologies designed for a particular task could be a unique representation of their project needs. Thus, integrating distributed and heterogeneous ontologies by finding semantic correspondences between their concepts has become the key point to achieve interoperability among different representations. In this thesis, an advanced instance-based ontology matching algorithm has been proposed to enable data integration tasks in ocean sensor networks, whose data are highly heterogeneous in syntax, structure, and semantics. This provides a solution to the ontology mapping problem in such systems based on machine-learning methods and string-based methods.
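The string-based side of such a matcher can be illustrated in a few lines of Python; the concept labels and the 0.6 threshold below are hypothetical, and a real instance-based matcher would combine this with machine learning over the instance data:

```python
from difflib import SequenceMatcher

def best_matches(source_concepts, target_concepts, threshold=0.6):
    """Map each source concept to its most similar target label, if any."""
    mapping = {}
    for s in source_concepts:
        scored = [(SequenceMatcher(None, s.lower(), t.lower()).ratio(), t)
                  for t in target_concepts]
        score, match = max(scored)
        if score >= threshold:       # discard weak correspondences
            mapping[s] = match
    return mapping

# Hypothetical concept labels from two ocean-sensor ontologies.
onto_a = ["SeaSurfaceTemperature", "WindSpeed", "Salinity"]
onto_b = ["sea_surface_temp", "wind_velocity", "salinity_psu", "turbidity"]
print(best_matches(onto_a, onto_b))
```

Here "WindSpeed" finds no match above the threshold, illustrating why purely string-based matching needs the instance-based complement the thesis proposes.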
APA, Harvard, Vancouver, ISO, and other styles
8

Hashmi, Muhammad Ali S. M. Massachusetts Institute of Technology. "Said-Huntington Discourse Analyzer : a machine-learning tool for classifying and analyzing discourse." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98543.

Full text of the source
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 71-74).
Critical discourse analysis (CDA) aims to understand the link "between language and the social" (Mautner and Baker, 2009), and attempts to demystify social construction and power relations (Gramsci, 1999). On the other hand, corpus linguistics deals with principles and practice of understanding the language produced within large amounts of textual data (Oostdijk, 1991). In my thesis, I have aimed to combine, using machine learning, the CDA approach with corpus linguistics with the intention of deconstructing dominant discourses that create, maintain and deepen fault lines between social groups and classes. As an instance of this technological framework, I have developed a tool for understanding and defining the discourse on Islam in the global mainstream media sources. My hypothesis is that the media coverage in several mainstream news sources tends to contextualize Muslims largely as a group embroiled in conflict at a disproportionately large level. My hypothesis is based on the assumption that discourse on Islam in mainstream global media tends to lean toward the dangerous "clash of civilizations" frame. To test this hypothesis, I have developed a prototype tool "Said-Huntington Discourse Analyzer" that machine classifies news articles on a normative scale -- a scale that measures "clash of civilization" polarization in an article on the basis of conflict. The tool also extracts semantically meaningful conversations for a media source using Latent Dirichlet Allocation (LDA) topic modeling, allowing the users to discover frames of conversations on the basis of Said-Huntington index classification. I evaluated the classifier on human-classified articles and found that the accuracy of the classifier was very high (99.03%). Generally, text analysis tools uncover patterns and trends in the data without delineating the 'ideology' that permeates the text. 
The machine learning tool presented here classifies media discourse on Islam in terms of conflict and non-conflict, and attempts to shed light on the 'ideology' that permeates the text. In addition, the tool provides textual analysis of news articles based on CDA methodologies.
by Muhammad Ali Hashmi.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
9

McCoy, Mason Eugene. "A Twitter-Based Prediction Tool for Digital Currency." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/theses/2302.

Full text of the source
Abstract:
Digital currencies (cryptocurrencies) are rapidly becoming commonplace in the global market. Trading is performed similarly to the stock market or commodities, but stock market prediction algorithms are not necessarily well-suited for predicting digital currency prices. In this work, we analyzed tweets with both an existing sentiment analysis package and a manually tailored "objective analysis," resulting in one impact value for each analysis per 15-minute period. We then used evolutionary techniques to select the most appropriate training method and the best subset of the generated features to include, as well as other parameters. This resulted in implementation of predictors which yielded much more profit in four-week simulations than simply holding a digital currency for the same time period--the results ranged from 28% to 122% profit. Unlike stock exchanges, which shut down for several hours or days at a time, digital currency prediction and trading seems to be of a more consistent and predictable nature.
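The aggregation of tweet scores into one impact value per 15-minute period can be sketched with pandas; the toy lexicon and scoring function below are invented stand-ins for the sentiment analysis package the thesis used:

```python
import pandas as pd

# Toy sentiment lexicon (hypothetical stand-in, not the thesis' package).
LEXICON = {"moon": 1.0, "surge": 0.8, "crash": -1.0, "dump": -0.7}

def tweet_score(text):
    """Average lexicon score of the words in one tweet; 0.0 if none match."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

tweets = pd.DataFrame({
    "time": pd.to_datetime(["2018-03-01 10:02",
                            "2018-03-01 10:11",
                            "2018-03-01 10:20"]),
    "text": ["btc to the moon", "prices surge again", "fear of a crash"],
})
tweets["score"] = tweets["text"].apply(tweet_score)

# One impact value per 15-minute period, as the abstract describes.
impact = tweets.set_index("time")["score"].resample("15min").mean().fillna(0.0)
print(impact)
```

Each 15-minute impact value then becomes one feature in the downstream price predictor.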
APA, Harvard, Vancouver, ISO, and other styles
10

Lutero, Gianluca. "A Tool For Data Analysis Using Autoencoders." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20510/.

Full text of the source
Abstract:
This thesis presents the design and development of SpeechTab, a web application that collects structured speech data from different subjects, and a technique that attempts to determine which subjects are affected by cognitive decline.
APA, Harvard, Vancouver, ISO, and other styles
11

Spies, Lucas Daniel. "Machine-Learning based tool to predict Tire Noise using both Tire and Pavement Parameters." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91407.

Full text of the source
Abstract:
Tire-Pavement Interaction Noise (TPIN) becomes the main noise source for passenger vehicles traveling at speeds above 40 kph. Therefore, it represents one of the main contributors to environmental noise pollution in residential areas near highways. TPIN has been the subject of exhaustive study since the 1970s. Still, almost 50 years later, there is no accurate way to model it. This is a consequence of the large number of noise generation mechanisms involved in this phenomenon and their highly complex nature. It is acknowledged that the main noise mechanisms involve tire vibration and air pumping within the tire tread and pavement surface. Moreover, TPIN represents the only vehicle noise source strongly affected by an external factor such as pavement roughness. Over the last decade, new machine learning algorithms to model TPIN have been implemented. However, their development relies on experimental data and does not provide strong physical insight into the problem. This research studied the correct configuration of such tools. More specifically, Artificial Neural Network (ANN) configurations were studied. Their implementation was based on the problem requirements (acoustic sound pressure prediction). Moreover, a customized neuron configuration showed improvements in the ANN's TPIN prediction capabilities. During the second stage of this thesis, tire noise tests were undertaken for different tires on different pavement surfaces on the Virginia Tech SMART road. The experimental data were used to develop an approach to account for the pavement profile when predicting TPIN. Finally, the new ANN configuration, along with the approach to account for pavement roughness, was complemented with previous work to obtain what is the first reasonably accurate and complete tool to predict tire noise. This tool uses as inputs: 1) tire parameters, 2) pavement parameters, and 3) vehicle speed. Tire noise narrowband spectra for a frequency range of 400-1600 Hz are obtained as a result.
Master of Science
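The input/output shape of such a tool, tire and pavement parameters plus speed in, a 400-1600 Hz narrowband spectrum out, can be sketched as a multi-output regression; every value below is synthetic, and the feature list is an assumption, not taken from the thesis:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 300
# Assumed inputs: [tread_depth_mm, tire_width_mm, pavement_roughness, speed_kph]
X = np.column_stack([rng.uniform(2, 10, n), rng.uniform(175, 255, n),
                     rng.uniform(0.1, 1.0, n), rng.uniform(40, 120, n)])

freqs = np.arange(400, 1601, 100)  # 13 narrowband bins spanning 400-1600 Hz
# Synthetic SPL targets: louder with speed and roughness, rolling off with frequency.
spl = (60 + 0.2 * X[:, [3]] + 10 * X[:, [2]]) - 5 * np.log10(freqs / 400)

# One network maps the parameter vector to the whole spectrum at once.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, spl)
print(model.predict(X[:1]).shape)  # one spectrum, 13 frequency bins
```

Predicting the whole spectrum jointly lets the network share structure across neighboring frequency bins.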
APA, Harvard, Vancouver, ISO, and other styles
12

Kim, Eun Young. "Machine-learning based automated segmentation tool development for large-scale multicenter MRI data analysis." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4998.

Full text of the source
Abstract:
Background: Volumetric analysis of brain structures from structural Magnetic Resonance (MR) images advances the understanding of the brain by providing means to study brain morphometric changes quantitatively along aging, development, and disease status. Due to the recent increased emphasis on large-scale multicenter brain MR study design, the demand for an automated brain MRI processing tool has increased as well. This dissertation describes an automatic segmentation framework for subcortical structures of brain MRI that is robust for a wide variety of MR data. Method: The proposed segmentation framework, BRAINSCut, is an integration of robust data standardization techniques and machine-learning approaches. First, a robust multi-modal pre-processing tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. The segmentation framework was then constructed to achieve robustness for large-scale data via the following comparative experiments: 1) Find the best machine-learning algorithm among several available approaches in the field. 2) Find an efficient intensity normalization technique for the proposed region-specific localized normalization with a choice of robust statistics. 3) Find high-quality features that best characterize the MR brain subcortical structures. Our tool is built upon 32 handpicked multi-modal multicenter MR images with manual traces of six subcortical structures (nucleus accumbens, caudate nucleus, globus pallidus, putamen, thalamus, and hippocampus) from three experts. A fundamental task associated with brain MR image segmentation for research and clinical trials is the validation of segmentation accuracy. This dissertation evaluated the proposed segmentation framework in terms of validity and reliability. Three groups of data were employed for the various evaluation aspects: 1) traveling human phantom data for multicenter reliability, 2) a set of repeated scans for measurement stability across various disease statuses, and 3) large-scale data from a Huntington's disease (HD) study for software robustness as well as segmentation accuracy. Result: Segmentation accuracy of six subcortical structures was improved with 1) the bias-corrected inputs, 2) the two region-specific intensity normalization strategies, and 3) the random forest machine-learning algorithm with the selected feature-enhanced image. The analysis of traveling human phantom data showed no center-specific bias in volume measurements from BRAINSCut. The repeated-measure reliability of most structures also displayed no specific association with disease progression, except for the caudate nucleus in the group at high risk for HD. The constructed segmentation framework was successfully applied on multicenter MR data from the PREDICT-HD [133] study (< 10% failure rate over 3000 scan sessions processed). Conclusion: The random-forest based segmentation method is effective and robust to large-scale multicenter data variation, especially with a proper choice of intensity normalization techniques. Benefits of proper normalization approaches are more apparent than those of the custom set of feature-enhanced images for the accuracy and robustness of the segmentation tool. BRAINSCut effectively produced subcortical volumetric measurements that are robust to center and disease status, with validity confirmed by human experts and a low failure rate on large-scale multicenter MR data. Sample size estimation, which is crucial for designing efficient clinical and research trials, is provided based on our experiments for six subcortical structures.
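The random-forest classification at the core of the reported result can be sketched as per-voxel classification over feature vectors; the features and data below are synthetic stand-ins for the multi-modal MR features BRAINSCut actually uses:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 2000
# Synthetic per-voxel features: [T1 intensity, T2 intensity, x, y, z]
labels = rng.integers(0, 2, n)           # 0 = background, 1 = structure
t1 = rng.normal(labels * 40 + 100, 10)   # intensities shift with the label
t2 = rng.normal(labels * -30 + 200, 10)
coords = rng.uniform(0, 1, (n, 3))
X = np.column_stack([t1, t2, coords])

# Train on part of the voxels, evaluate on held-out voxels.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X[:1500], labels[:1500])
acc = rf.score(X[1500:], labels[1500:])
print(f"held-out voxel accuracy: {acc:.2f}")
```

In the real framework, the intensity features would come from the normalized multi-modal images, which is why the abstract stresses the choice of normalization technique.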
APA, Harvard, Vancouver, ISO, and other styles
13

Mukherjee, Anika. "Pattern Recognition and Machine Learning as a Morphology Characterization Tool for Assessment of Placental Health." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42731.

Full text of the source
Abstract:
Introduction: The placenta is a complex, disk-shaped organ vital to a successful pregnancy and responsible for materno-fetal exchange of vital gases and biochemicals. Instances of compromised placental development or function – collectively termed placental dysfunction – underlie the most common and devastating pregnancy complications observed in North America, including preeclampsia (PE) and fetal growth restriction (FGR). A comprehensive histopathology examination of the placenta following delivery can help clarify obstetrical disease etiology and progression and offers tremendous potential in the identification of patients at risk of recurrence in subsequent pregnancies, as well as patients at high risk of chronic diseases in later life. However, these types of examinations require a high degree of specialized training and are resource intensive, limiting their availability to tertiary care centers in large city centres. The development of machine learning algorithms tailored to placenta histopathology applications may allow for automation and/or standardization of this important clinical exam – expanding its appropriate usage and impact on the health of mothers and infants. The primary objective of the current project is to develop and pilot the use of machine learning models capable of placental disease classification using digital histopathology images of the placenta. Methods: 1) A systematic review was conducted to identify the current methods being applied to automate histopathology screening to inform experimental design for later components of the project. Of 230 peer-reviewed articles retrieved in the search, 18 articles met all inclusion criteria and were used to develop guidelines for best practices. 2) To facilitate machine learning model development on placenta histopathology samples, a villi segmentation algorithm was developed to aid with feature extraction by providing objective metrics to automatically quantify microscopic placenta images.
The segmentation algorithm applied colour clustering and a tophat transform to delineate the boundaries between neighbouring villi. 3) As a proof-of-concept, 2 machine learning algorithms were tested to evaluate their ability to predict the clinical outcome of preeclampsia (PE) using placental histopathology specimens collected through the Research Centre for Women's and Infant's Health (RCWIH) BioBank. The sample set included digital images from 50 cases of early onset PE, 29 cases of late onset PE and 69 controls with matching gestational ages. All images were pre-processed using patch extraction, colour normalization, and image transformations. Features of interest were extracted using: a) the villi segmentation algorithm; b) SIFT keypoint descriptors (textural features); c) integrated feature extraction (in the context of deep learning model development). Using the different methods of feature extraction, two different machine learning approaches were compared - Support Vector Machine (SVM) and Convolutional Neural Network (CNN, deep learning). To track model improvement during training, cross validation on 20% of the total dataset was used (deep learning algorithm only) and the trained algorithms were evaluated on a test dataset (20% of the original dataset previously unseen by the model). Results: From the systematic review, 5 key steps were found to be essential for machine learning model development on histopathology images (image acquisition and preparation, image preprocessing, feature extraction, pattern recognition and classification model training, and model testing) and recommendations were provided for the optimal methods for each of the 5 steps. The segmentation algorithm was able to correctly identify individual villi with an F1 score of 80.76% - a significantly better performance than recently published methods.
A maximum accuracy of 73% for the machine learning experiments was obtained when using textural features (SIFT keypoint descriptors) in an SVM model, using onset of PE disease (early vs. late) as the output classification of interest. Conclusion: Three major outcomes came of this project: 1) the range of methods available to develop automated screening tools for histopathology images with machine learning was consolidated and a set of best practices was proposed to guide future projects, 2) a villi segmentation tool was developed that can automatically segment all individual villi from an image and extract biologically relevant features that can be used in machine learning model development, and 3) a prototype machine learning classification tool for placenta histopathology was developed that was able to achieve moderate classification accuracy when distinguishing cases of early onset PE and late onset PE from controls. The collective body of work has made significant contributions to the fields of placenta pathology and computer vision, laying the foundation for significant progress aimed at integrating machine learning tools into the clinical setting of perinatal pathology.
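The two image-processing steps named for the villi segmentation algorithm, colour clustering and a top-hat transform, can be sketched on a synthetic grayscale image. KMeans and SciPy's white top-hat stand in here for the thesis' exact implementation, and the image itself is fabricated:

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

# Synthetic "histology" image: two bright blobs on a dark background.
img = np.zeros((64, 64))
img[10:30, 10:30] = 1.0
img[35:55, 35:55] = 1.0
img += np.random.default_rng(0).normal(0, 0.05, img.shape)

# Step 1: cluster pixel intensities into tissue vs background
# (a stand-in for colour clustering on RGB histology slides).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(img.reshape(-1, 1))
tissue = km.labels_.reshape(img.shape)

# Step 2: the white top-hat keeps structures smaller than the footprint,
# helping delineate boundaries between neighbouring villi-like blobs.
tophat = ndimage.white_tophat(img, size=25)

# Count the separated regions, as a segmentation algorithm would.
labeled, n_regions = ndimage.label(img > 0.5)
print(n_regions)
```

Counting connected components after thresholding is the simplest analogue of "segmenting all individual villi" for downstream feature extraction.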
APA, Harvard, Vancouver, ISO, and other styles
14

Jakob, Persson. "How to annotate in video for training machine learning with a good workflow." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-187078.

Full text of the source
Abstract:
Artificial intelligence and machine learning are used in many different areas; one of these areas is image recognition. In the production of a TV show or film, image recognition can be used to help the editors find specific objects, scenes, or people in the video content, which speeds up the production. But image recognition does not always work perfectly and cannot yet be used in TV or film production as intended. Therefore the image recognition algorithms need to be trained on large datasets to become better. But creating these datasets takes time, and tools are needed that let users create specific datasets and retrain algorithms to become better. The aim of this master thesis was to investigate whether it was possible to create a tool that can annotate objects and people in video content and use the data as training sets, and a tool that can retrain the output of an image recognition model to make the image recognition become better. It was also important that the tools have a good workflow for the users. The study consisted of a theoretical study to gain more knowledge about annotation and how to make a good UX design with a good workflow. Interviews were also held to gain more knowledge of the requirements of the product. This resulted in a user scenario and a workflow that were used together with the knowledge from the theoretical study to create a hi-fi prototype through an iterative process with usability testing. The result was a final hi-fi prototype with a good design and a good workflow for the users, where it is possible to annotate objects and people with a bounding box, and where it is possible to retrain an image recognition program that has been used on video content.
APA, Harvard, Vancouver, ISO, and other styles
15

Massaccesi, Luciano. "Machine Learning Software for Automated Satellite Telemetry Monitoring." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20502/.

Full text of the source
Abstract:
During the lifetime of a satellite, malfunctions may occur. Unexpected behaviours are monitored using sensors all over the satellite. The telemetry values are then sent to Earth and analysed in search of anomalies. These anomalies could be detected by humans, but this is considerably expensive. To lower the costs, machine learning techniques can be applied. In this research, many different machine learning techniques are tested and compared using satellite telemetry data provided by OHB System AG. The fact that the anomalies are collective, together with some properties of the data, is exploited to improve the performance of the machine learning algorithms. Since the data comes from a real spacecraft, it presents some defects: it covers only a small time span and, owing to the spacecraft's healthiness, contains no critical anomalies. Some steps are therefore taken to improve the evaluation of the algorithms.
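The abstract does not name the specific techniques compared, but the general idea of flagging anomalies in a telemetry channel can be sketched with a simple rolling z-score detector. This is illustrative code only; the function name, data, window size, and threshold are invented here, not taken from the thesis:

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(values, window=20, threshold=3.0):
    """Flag indices whose value deviates strongly from the preceding window."""
    flagged = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # a perfectly flat window carries no scale information
        if abs(values[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A flat-ish telemetry channel with one injected spike at index 30.
telemetry = [10.0 + 0.1 * (i % 5) for i in range(60)]
telemetry[30] = 25.0
print(rolling_zscore_anomalies(telemetry))  # → [30]
```

A collective-anomaly variant, as studied in the thesis, would additionally require several consecutive indices to be flagged before raising an alarm.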
APA, Harvard, Vancouver, ISO, and other styles
16

Goteti, Aniruddh. "Machine Learning Approach to the Design of Autonomous Construction Equipment applying Data-Driven Decision Support Tool." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17635.

Full text of the source
Abstract:
Design engineers working in the construction machinery industry face many complexities and uncertainties while taking important decisions during the design of construction equipment. These complexities can be reduced by implementing a data-driven decision support tool that can predict the behaviour of the machine under operational complexity and give valuable insights to the design engineer. Such a tool must be supported by a suitable machine learning algorithm. The focus of this thesis is to find a suitable machine learning algorithm that can predict the behaviour of a machine and can later be used in the development of such data-driven decision support tools. To find such a solution, the regression performance of four supervised machine learning regression algorithms is evaluated, namely Support Vector Machine Regression, Bayesian Ridge Regression, Decision Tree Regression and Random Forest Regression. The evaluation was done on data-sets personally observed/collected at the site, extracted from the autonomous construction machine by the Product Development Research Lab (P.D.R.L.). An experiment was chosen as the research methodology based on the quantitative format of the data set. The sensor data was extracted from the autonomous machine in time series format and converted to supervised data with the help of the sliding window method. The four chosen algorithms were then trained on the mentioned data-sets and evaluated with certain performance metrics (MSE, RMSE, MAE, training time). Based on the rigorous data collection, experimentation and analysis, Bayesian Ridge Regression was found to perform best on all performance metrics and was chosen as the optimal algorithm to be used in the development of a data-driven decision support tool for design engineers working in the construction industry.
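The sliding window method mentioned above is a standard way to turn a time series into supervised (features, target) pairs. A minimal sketch of that conversion (illustrative, not the thesis code):

```python
def sliding_window(series, n_lags=3):
    """Turn a univariate time series into supervised (features, target) pairs:
    each target value is paired with the n_lags values that precede it."""
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])  # the n_lags preceding observations
        y.append(series[i])             # the value to predict
    return X, y

X, y = sliding_window([1, 2, 3, 4, 5], n_lags=3)
print(X)  # → [[1, 2, 3], [2, 3, 4]]
print(y)  # → [4, 5]
```

The resulting pairs can be fed to any regressor, which is what allows the four algorithms above to be compared on the same data.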
APA, Harvard, Vancouver, ISO, and other styles
17

Podapati, Sasidhar. "Fitness Function for a Subscriber." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13953.

Full text of the source
Abstract:
Mobile communication has become a vital part of modern communication. With the rise in mobile phone usage, the cost of network infrastructure has become a deciding factor. Subscriber mobility patterns have a major effect on the load of radio cells in the network, so analysis of subscriber mobility data is of utmost priority. The paper aims at classifying the entire dataset provided by Telenor into two main groups, i.e. infrastructure-stressing and infrastructure-friendly, with respect to their impact on the mobile network. The research aims to predict the behaviour of a new subscriber based on his or her MOSAIC group. A heuristic method is formulated to characterize the subscribers into three different segments based on their mobility, and Tetris Optimization is used to reveal the "infrastructure-stressing" subscribers in the mobile network. All the experiments were conducted on the subscriber trajectory data provided by the telecom operator. The results reveal that 5 percent of the subscribers in the entire data set are infrastructure-stressing. A classification model was developed and evaluated to label a new subscriber as friendly or stressing using the WEKA machine learning tool; Naïve Bayes, k-nearest neighbour and J48 decision tree are the classification algorithms used to train the model and to find the relations between features in the labeled subscriber dataset.
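As a rough illustration of one of the algorithms named above, a tiny k-nearest-neighbour classifier can label subscribers from mobility features. Everything here (the feature choice, data points, and labels) is invented for illustration and is not taken from the Telenor dataset:

```python
from collections import Counter
from math import dist

def knn_predict(points, labels, query, k=3):
    """Label `query` by majority vote among its k nearest neighbours."""
    nearest = sorted(range(len(points)), key=lambda i: dist(points[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Toy mobility features: (radio cells visited per day, km travelled per day).
subscribers = [(2, 5), (3, 4), (1, 3), (25, 80), (30, 120), (28, 95)]
labels = ["friendly", "friendly", "friendly",
          "stressing", "stressing", "stressing"]

print(knn_predict(subscribers, labels, (27, 100)))  # → stressing
print(knn_predict(subscribers, labels, (2, 4)))     # → friendly
```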
APA, Harvard, Vancouver, ISO, and other styles
18

Knoth, Stefanie. "Topic Explorer Dashboard : A Visual Analytics Tool for an Innovation Management System enhanced by Machine Learning Techniques." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105981.

Full text of the source
Abstract:
Innovation Management Software contains complex data with many different variables. This data is usually presented in tabular form or with isolated graphs that visualize a single independent aspect of a dataset. However, displaying this data with interconnected, interactive charts provides much more flexibility and more opportunities for working with and understanding the data. Charts that show multiple aspects of the data at once can help in uncovering hidden relationships between different aspects of the data and in finding new insights that might be difficult to see with the traditional way of displaying data. The size and complexity of the available data also invite analyzing it with machine learning techniques. In this thesis it is first explored how machine learning techniques can be used to gain additional insight from the data, and the results of this investigation are then used, together with the original data, to build a prototypical dashboard for exploratory visual data analysis. This dashboard is then evaluated by means of the ICE-T heuristics, and the results and findings are discussed.
APA, Harvard, Vancouver, ISO, and other styles
19

Evans, Steven William. "Groundwater Level Mapping Tool: Development of a Web Application to Effectively Characterize Groundwater Resources." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7738.

Full text of the source
Abstract:
Groundwater is used worldwide as a major source for agricultural irrigation, industrial processes, mining, and drinking water. An accurate understanding of groundwater levels and trends is essential for decision makers to effectively manage groundwater resources throughout an aquifer, ensuring its sustainable development and usage. Unfortunately, groundwater is one of the most challenging and expensive water resources to characterize, quantify, and monitor on a regional basis. Data, though present, are often limited or sporadic, and are generally not used to their full potential to aid decision makers in their groundwater management. This thesis presents a solution to this under-utilization of available data through the creation of an open-source, Python-based web application used to characterize, visualize, and quantify groundwater resources on a regional basis. This application includes tools to extrapolate and interpolate time series observations of groundwater levels in monitoring wells through multi-linear regression, using correlated data from other wells. It is also possible to extrapolate time series observations using machine learning techniques with Earth observations as inputs. The app also performs spatial interpolation using GSLIB Kriging code. Combining the results of spatial and temporal interpolation, the app enables the user to calculate changes in aquifer storage, and to produce and view aquifer-wide maps and animations of groundwater levels over time. This tool provides decision makers with an easy-to-use and easy-to-understand method for tracking groundwater resources. Thus far, it has been used to map groundwater in Texas, Utah, South Africa, Colombia, and the Dominican Republic.
APA, Harvard, Vancouver, ISO, and other styles
20

Kottorp, Max, and Filip Jäderberg. "Chatbot As a Potential Tool for Businesses : A study on chatbots made in collaboration with Bisnode." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210768.

Full text of the source
Abstract:
The investigation aims to answer whether a chatbot is a potential complement to the internal service desk of a company. The work centered around developing a chatbot able to handle simple Q&A interaction with the internal service desk of Bisnode, the company in question. The chatbot acted as a proof of concept, which was then tested by 15 individuals. The testing was done with pre-defined user scenarios, where each test person ultimately filled in a questionnaire with statements related to the overall experience. By summarizing the user evaluations from the questionnaires, combined with a SWOT analysis, the work concluded that a chatbot is indeed a potential complement to the internal service desk of a company, provided that it handles Q&A interaction.
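Simple Q&A interaction of the kind described can be sketched as keyword-overlap retrieval over a small FAQ. The questions, answers, and fallback text below are invented for illustration and are not Bisnode's:

```python
def answer(question, faq):
    """Return the stored answer whose question shares the most words with the
    input, or a fallback when nothing overlaps at all."""
    words = set(question.lower().split())
    best = max(faq, key=lambda q: len(words & set(q.lower().split())))
    if not words & set(best.lower().split()):
        return "Sorry, I don't know that one. Forwarding you to the service desk."
    return faq[best]

faq = {
    "how do i reset my password": "Visit the self-service portal and choose 'Reset password'.",
    "how do i order a new laptop": "File a hardware request in the service catalogue.",
}
print(answer("password reset please", faq))  # → the password-reset answer
```

A production chatbot would replace the word overlap with an intent classifier, but the retrieval structure stays the same.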
APA, Harvard, Vancouver, ISO, and other styles
21

Fiebrink, Rebecca. "An exploration of feature selection as a tool for optimizing musical genre classification /." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99372.

Full text of the source
Abstract:
The computer classification of musical audio can form the basis for systems that allow new ways of interacting with digital music collections. Existing music classification systems suffer, however, from inaccuracy as well as poor scalability. Feature selection is a machine-learning tool that can potentially improve both accuracy and scalability of classification. Unfortunately, there is no consensus on which feature selection algorithms are most appropriate or on how to evaluate the effectiveness of feature selection. Based on relevant literature in music information retrieval (MIR) and machine learning and on empirical testing, the thesis specifies an appropriate evaluation method for feature selection, employs this method to compare existing feature selection algorithms, and evaluates an appropriate feature selection algorithm on the problem of musical genre classification. The outcomes include an increased understanding of the potential for feature selection to benefit MIR and a new technique for optimizing one type of classification-based system.
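The simplest family of feature selection algorithms considered in work like this is greedy forward selection wrapped around a classifier. A minimal sketch, using a nearest-centroid classifier as the wrapped model; the data, labels, and function names are invented, not taken from the thesis:

```python
def centroid_accuracy(X, y, feats):
    """Training accuracy of a nearest-centroid classifier restricted to `feats`."""
    centroids = {}
    for label in set(y):
        rows = [x for x, l in zip(X, y) if l == label]
        centroids[label] = [sum(r[f] for r in rows) / len(rows) for f in feats]
    correct = 0
    for x, label in zip(X, y):
        pred = min(centroids, key=lambda c: sum((x[f] - v) ** 2
                                                for f, v in zip(feats, centroids[c])))
        correct += (pred == label)
    return correct / len(y)

def forward_select(X, y, n_feats):
    """Greedily add the feature that most improves the wrapped classifier."""
    selected, remaining = [], list(range(len(X[0])))
    while remaining and len(selected) < n_feats:
        best = max(remaining, key=lambda f: centroid_accuracy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Feature 1 separates the classes; features 0 and 2 are noise.
X = [[5, 0.0, 7], [3, 0.1, 2], [9, 0.9, 4], [1, 1.0, 6]]
y = ["a", "a", "b", "b"]
print(forward_select(X, y, 1))  # → [1]
```

Evaluating such wrappers fairly (held-out data, multiple classifiers) is exactly the methodological question the thesis addresses.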
APA, Harvard, Vancouver, ISO, and other styles
22

Molinar, Torres Gabriela Alejandra [Verfasser]. "Machine Learning Tool for Transmission Capacity Forecasting of Overhead Lines based on Distributed Weather Data / Gabriela Alejandra Molinar Torres." Karlsruhe : KIT-Bibliothek, 2020. http://d-nb.info/1223985873/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Lee, Ji Hyun. "Development of a Tool to Assist the Nuclear Power Plant Operator in Declaring a State of Emergency Based on the Use of Dynamic Event Trees and Deep Learning Tools." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1543069550674204.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
24

Ayuso, Anna Maria E. "Automation of Drosophila gene expression pattern image annotation : development of web-based image annotation tool and application of machine learning methods." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66403.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 91-92).
Large-scale in situ hybridization screens are providing an abundance of spatio-temporal gene expression data that is valuable for understanding the mechanisms of gene regulation. Drosophila gene expression pattern images have been generated by the Berkeley Drosophila Genome Project (BDGP) for over 7,000 genes in over 90,000 digital images. These images are currently hand-curated by field experts with developmental and anatomical terms based on the stained regions. These annotations enable the integration of spatial expression patterns with other genomic data sets that link regulators with their downstream targets. However, manual curation has become a bottleneck in the process of analyzing the rapidly generated data; it is therefore necessary to explore computational methods for the curation of gene expression pattern images. This thesis addresses improving the manual annotation process with a web-based image annotation tool and also enabling automation of the process using machine learning methods. First, a tool called LabelLife was developed to provide a systematic and flexible way of annotating images, groups of images, and shapes within images using terms from a controlled vocabulary. Second, machine learning methods for automatically predicting vocabulary terms for a given image based on image feature data were explored and implemented. The results of the applied machine learning methods are promising in terms of predictive ability, which has the potential to simplify and expedite the curation process, thereby increasing the rate at which biologically significant data can be evaluated and new insights can be gained.
by Anna Maria E. Ayuso.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
25

Giulianini, Luca. "Progettazione e sviluppo di un tool di supporto alla rilevazione di alterazioni digitali in immagini del volto." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23404/.

Full text of the source
Abstract:
The concept of identity is an extremely important one for humankind. Identity is founded on the peculiar characteristics that distinguish each individual and differentiate one person from another, making every person a unique and unrepeatable subject. Although the characteristics that make up a person's identity are numerous and extremely varied, physical characteristics are generally agreed to be primary in the act of personal recognition. For this reason, throughout human history, the development of identification methodologies has turned increasingly to human physiology rather than to behaviour, culminating today in the most modern biometric recognition systems. These kinds of systems have become pervasive, above all in contexts where human checks are often complex and limited. With the advent of the Covid-19 pandemic, an ever-growing number of airports and companies have accelerated their investments in biometric solutions, motivated by the need to speed up access while minimizing contact between individuals. Although many have greeted this news with great enthusiasm, a number of studies in the literature show that these systems, while robust under controlled scenarios, can currently be subject to attacks.
APA, Harvard, Vancouver, ISO, and other styles
26

Mao, Jin, Lisa R. Moore, Carrine E. Blank, Elvis Hsin-Hui Wu, Marcia Ackerman, Sonali Ranade, and Hong Cui. "Microbial phenomics information extractor (MicroPIE): a natural language processing tool for the automated acquisition of prokaryotic phenotypic characters from text sources." BIOMED CENTRAL LTD, 2016. http://hdl.handle.net/10150/622562.

Full text of the source
Abstract:
Background: The large-scale analysis of phenomic data (i.e., full phenotypic traits of an organism, such as shape, metabolic substrates, and growth conditions) in microbial bioinformatics has been hampered by the lack of tools to rapidly and accurately extract phenotypic data from existing legacy text in the field of microbiology. To quickly obtain knowledge on the distribution and evolution of microbial traits, an information extraction system needed to be developed to extract phenotypic characters from large numbers of taxonomic descriptions so they can be used as input to existing phylogenetic analysis software packages. Results: We report the development and evaluation of Microbial Phenomics Information Extractor (MicroPIE, version 0.1.0). MicroPIE is a natural language processing application that uses a robust supervised classification algorithm (Support Vector Machine) to identify characters from sentences in prokaryotic taxonomic descriptions, followed by a combination of algorithms applying linguistic rules with groups of known terms to extract characters as well as character states. The input to MicroPIE is a set of taxonomic descriptions (clean text); the output is a taxon-by-character matrix, with taxa in the rows and a set of 42 pre-defined characters (e.g., optimum growth temperature) in the columns. The performance of MicroPIE was evaluated against a gold standard matrix and another student-made matrix. Results show that, compared to the gold standard, MicroPIE extracted 21 characters (50%) with a Relaxed F1 score > 0.80 and 16 characters (38%) with Relaxed F1 scores ranging between 0.50 and 0.80. Inclusion of a character prediction component (SVM) improved the overall performance of MicroPIE, notably the precision. Evaluated against the same gold standard, MicroPIE performed significantly better than the undergraduate students.
Conclusion: MicroPIE is a promising new tool for the rapid and efficient extraction of phenotypic character information from prokaryotic taxonomic descriptions. However, further development, including incorporation of ontologies, will be necessary to improve the performance of the extraction for some character types.
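The F1 evaluation referred to above can be illustrated with a strict set-based variant; MicroPIE's "Relaxed" F1 loosens the matching criterion, and the example character states here are invented for illustration:

```python
def set_f1(extracted, gold):
    """Precision, recall and F1 of extracted character states against a gold set
    (strict set matching; a relaxed variant would also accept near-matches)."""
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {"rod-shaped", "motile", "optimum growth 30C"}
extracted = {"rod-shaped", "motile", "optimum growth 25C"}
print(set_f1(extracted, gold))  # → (0.666..., 0.666..., 0.666...)
```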
APA, Harvard, Vancouver, ISO, and other styles
27

Rosa, Simone. "Analisi dei segnali vibratori di una macchina utensile per brocciatura." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find the full text of the source
Abstract:
Predictive maintenance is a key element of Industry 4.0. Continuous monitoring of an asset's condition fits perfectly with the fundamental components of Industry 4.0, such as Big Data and the Internet of Things. Despite the popularity predictive maintenance has enjoyed for several years, it is not easy to find concrete applications of Condition Based Maintenance (CBM), in particular for tool monitoring in broaching operations. This work is divided into two main sections. The first briefly summarizes the development of maintenance strategies up to the current state of the art and then gives an overview of the spread of predictive maintenance in industry and of its benefits. The second presents an analysis of the vibration signals acquired from a high-productivity broaching machine tool, in order to assess the feasibility of a system that recognizes tool damage based solely on vibrations.
APA, Harvard, Vancouver, ISO, and other styles
28

Modugula, Venkateswarulu Yashwanth Krishna, and Hegde Raghavendra Shridhar. "Costs & Benefits of an AI/IT Tool for the Swedish Antibiotics Supply Chain : An AI/IT Tool to address shortages of Antibiotics in Sweden." Thesis, Uppsala universitet, Institutionen för samhällsbyggnad och industriell teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-433911.

Full text of the source
Abstract:
Sweden faces shortages of antibiotics, caused by a variety of factors. Due to low profit margins and opportunity costs, antibiotic supply chains may lack competition, and a lack of competition across the various stages of a supply chain makes it fragile, which ultimately results in shortages. Lack of communication is another such factor. Incorporating an AI/IT system across the supply chain would help prevent shortages by addressing these factors. PLATINEA, an innovation platform, aims to address the threat of antimicrobial resistance by ensuring a steady supply of antibiotics. Its work package 4 is dedicated to eliminating risk factors or causes of shortages that arise from supply chains of antibiotics, and PLATINEA has drafted a mind map to identify these risk factors and causes in Sweden. This thesis conducts a cost-benefit analysis for implementing an AI/IT tool that addresses the risk factors and causes of shortages identified from the mind map that stem from the Swedish supply chains of antibiotics. A model consisting of a breakdown of costs and benefits was created; the model not only helped us frame the various costs and benefits, but also evolved during the research to help us structure our results better. An AI/IT tool has been devised with the risk factors and causes of shortage in mind. This tool has four versions with varying levels of integration and automation. Semi-structured interviews were conducted with experts in the field of artificial intelligence and machine learning, and calculations based on historical data were made to determine the costs of shortages and, to some extent, visualize the extent of the costs involved in antibiotic resistance. Based on the information gathered from the interviews and the literature, the costs and benefits identified in the model are addressed, including the significant benefit of reducing the cost of shortages.
APA, Harvard, Vancouver, ISO, and other styles
29

Johansson, Richard, and Heino Otto Engström. "Topic propagation over time in internet security conferences : Topic modeling as a tool to investigate trends for future research." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177748.

Full text of the source
Abstract:
When conducting research, it is valuable to find highly ranked papers closely related to the specific research area without spending too much time reading insignificant papers. To make this process more effective, an automated way to extract topics from documents would be useful, and this is possible using topic modeling. Topic modeling can also be used to provide topic trends: where a topic is first mentioned and who the original author was. In this paper, over 5000 articles are scraped from four different top-ranked internet security conferences using a web scraper built in Python. From the articles, fourteen topics are extracted using the topic modeling library Gensim and LDA Mallet, and the topics are visualized in graphs to find trends about which topics are emerging and which are fading away over twenty years. The result of this research is that topic modeling is a powerful tool for extracting topics and, when put into a time perspective, makes it possible to identify topic trends, which can be explained when put into a bigger context.
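The thesis extracts topics with Gensim and LDA Mallet; a faithful example would require those external libraries, so the sketch below substitutes a simple keyword count per year that illustrates only the trend-over-time idea (the papers and keyword are invented):

```python
from collections import defaultdict

def topic_trend(papers, keyword):
    """Count, per year, the papers whose title mentions a topic keyword."""
    counts = defaultdict(int)
    for year, title in papers:
        if keyword in title.lower():
            counts[year] += 1
    return dict(sorted(counts.items()))

papers = [
    (2018, "Phishing detection at scale"),
    (2019, "Phishing kits in the wild"),
    (2019, "Fuzzing embedded firmware"),
    (2020, "Phishing and MFA fatigue"),
]
print(topic_trend(papers, "phishing"))  # → {2018: 1, 2019: 1, 2020: 1}
```

In the thesis, the per-year counts come from LDA topic assignments rather than literal keyword matches, but the plotting and trend analysis work the same way.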
APA, Harvard, Vancouver, ISO, and other styles
30

Engelmann, James E. "An Information Management and Decision Support tool for Predictive Alerting of Energy for Aircraft." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1595779161412401.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Vargas, Gonzalez Andres. "SketChart: A Pen-Based Tool for Chart Generation and Interaction." Master's thesis, University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6375.

Full text of the source
Abstract:
It has been shown that representing data with the right visualization increases the understanding of qualitative and quantitative information encoded in documents. However, current tools for generating such visualizations rely on traditional WIMP techniques, which arguably makes free interaction and direct manipulation of the content harder. In this thesis, we present a pen-based prototype for data visualization using 10 different types of bar-based charts. The prototype lets users sketch a chart and interact with the information once the drawing is identified. The prototype's user interface consists of an area to sketch and touch-based elements that are displayed depending on the context and nature of the outline. Brainstorming and live presentations can benefit from the prototype due to the ability to visualize and manipulate data in real time. We also perform a short, informal user study to measure the effectiveness of the tool in recognizing sketches and users' acceptance while interacting with the system. Results show SketChart's strengths and weaknesses and areas for improvement.
M.S.
Masters
Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
32

Berkman, Anton, and Gustav Andersson. "Predicting the impact of prior physical activity on shooting performance." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-46851.

Full text of the source
Abstract:
The objectives of this thesis were to develop a machine learning tool-chain and to investigate the relationship between heart rate, trigger squeeze and shooting accuracy when firing a handgun in a simulated environment. Several aspects affect the accuracy of a shooter. To accelerate the learning process and to complement the instructors, different sensors can be used by the shooter; by extracting sensor data and presenting it to the shooter in real time, the rate of improvement can potentially be accelerated. An experiment which replicated precision shooting was conducted at SAAB AB using their GC-IDT simulator. 14 participants took part, with experience ranging from zero to over 30 years. The participants were randomly divided into two groups, where one group started the experiment with a heart rate of at least 150 beats per minute. The iTouchGlove2.3 was used to measure trigger squeeze and a Polar H10 heart rate belt was used to measure heart rate. Random forest regression was then used to predict accuracy from the data collected in the experiment. A machine learning tool-chain was successfully developed to process the raw sensor data, which was then used by a random forest regression algorithm to form a prediction. This thesis provides insights and guidance for further experimental explorations of handgun exercises and shooting performance.
APA, Harvard, Vancouver, ISO, and other styles
33

Suleiman, Iyad. "Integrating data mining and social network techniques into the development of a Web-based adaptive play-based assessment tool for school readiness." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/7293.

Full text of the source
Abstract:
A major challenge facing most families is effectively anticipating how ready a given child is to start school. Traditional tests are not very effective, as they depend on the skills of the expert conducting the test. It is argued that automated tools are more attractive, especially when extended with game capabilities, which are the most attractive way to get children seriously involved in the test. The first part of this thesis reviews the school readiness approaches applied in various countries; this motivated the development of the sophisticated system described in the thesis. Extensive research was conducted to enrich the system with features drawing on machine learning and social network analysis. A modified genetic algorithm was integrated into a web-based stealth assessment tool for school readiness. The research goal is to create a web-based stealth assessment tool that can learn the user's skills and adjust the assessment tests accordingly. The user plays various sessions from various games, while the genetic algorithm (GA) selects the upcoming session or group of sessions to be presented to the user according to his/her skills and status. The modified GA and the learning procedure are described. A penalizing system and a fitness heuristic for best-choice selection were integrated into the GA. Two methods for learning are presented, namely a memory system and a no-memory system, together with several methods for improving the speed of learning. In addition, learning mechanisms were introduced on the social network side to address further automation of stealth assessment. The effect of relatives and friends on the readiness of the child was studied by investigating the social communities to which the child belongs and how the trends in these communities reflect on the child under investigation.
The plan is to develop this framework further by incorporating more information related to social network construction and analysis. It is also planned to turn the framework into a self-adaptive one by utilizing feedback from usage patterns to learn and adjust the evaluation process accordingly.
APA, Harvard, Vancouver, ISO, and other styles
34

Lallé, Sébastien. "Assistance à la construction et à la comparaison de techniques de diagnostic des connaissances." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM042/document.

Full text of the source
Abstract:
Comparing and building knowledge diagnostics is a challenge in the field of Technology Enhanced Learning (TEL) systems. A knowledge diagnostic aims to infer the knowledge mastered or not by a student in a given learning domain (such as mathematics for high school) from student traces recorded by the TEL system. Knowledge diagnostics are widely used, but they depend strongly on the learning domain and are not well formalized. Thus, there exists no method or tool to build, compare, and evaluate different diagnostics applied to a given learning domain. Similarly, using a diagnostic in two different domains usually implies reimplementing it almost from scratch in both. Yet comparing and reusing knowledge diagnostics can reduce the engineering cost, reinforce the evaluation, and ultimately help knowledge diagnostic designers choose a diagnostic. We propose a method, reified in a first platform, to assist knowledge diagnostic designers in building and comparing knowledge diagnostics, using a new formalization of the diagnostic and of student traces. To help build diagnostics, we use a semi-automatic machine learning algorithm, guided by an ontology of the traces and the knowledge defined by the designer. To help compare diagnostics, we use a set of comparison criteria (either statistical or specific to the field of TEL systems) applied to the results of each diagnostic on a given set of traces. The main contribution is that our method is generic over diagnostics, meaning that very different diagnostics can be built and compared, unlike in previous work on this topic. We evaluated our work through three experiments. The first applied our method to three different domains and sets of traces (namely geometry, reading, and surgery) to build and compare five different knowledge diagnostics in cross-validation. The second was about designing and implementing a new comparison criterion specific to TEL systems: the impact of a knowledge diagnostic on a pedagogical decision, the choice of a type of help to give to a student. The last experiment was about designing and adding a new diagnostic to our platform, in collaboration with an expert in didactics.
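The statistical comparison criteria the abstract mentions can be made concrete with a small sketch. Everything below is illustrative only (invented mastery data, two simulated diagnostics, not the thesis's platform): each diagnostic's output over the same traces is scored with accuracy and with Cohen's kappa, a chance-corrected agreement measure.

```python
# Illustrative only: two simulated diagnostics scored on the same traces.
def accuracy(pred, truth):
    """Fraction of traces where the diagnostic matches the observed outcome."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def cohen_kappa(pred, truth):
    """Chance-corrected agreement between a diagnostic and the outcomes."""
    n = len(truth)
    po = accuracy(pred, truth)                      # observed agreement
    labels = set(pred) | set(truth)
    pe = sum((pred.count(l) / n) * (truth.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# 1 = skill mastered, 0 = not mastered (invented data)
observed     = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
diagnostic_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]      # one disagreement
diagnostic_b = [1, 0, 0, 1, 1, 0, 0, 1, 0, 1]      # three disagreements

scores = {name: (accuracy(d, observed), cohen_kappa(d, observed))
          for name, d in [("A", diagnostic_a), ("B", diagnostic_b)]}
best = max(scores, key=lambda name: scores[name][0])
```

On this toy data, diagnostic A wins on both criteria; in the thesis's setting such scores would be computed per domain and per trace set.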
35

Dietrich, Stefan [Verfasser], Heiner [Akademischer Betreuer] Boeing, Heiner [Gutachter] Boeing, and Dagmar [Gutachter] Drogan. "Investigation of the machine learning method Random Survival Forest as an exploratory analysis tool for the identification of variables associated with disease risks in complex survival data / Stefan Dietrich ; Gutachter: Heiner Boeing, Dagmar Drogan ; Betreuer: Heiner Boeing." Berlin : Technische Universität Berlin, 2016. http://d-nb.info/1156334772/34.

36

Kanwar, John. "Smart cropping tools with help of machine learning." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74827.

Abstract:
Machine learning has been around for a long time, and its applications span a wide variety of subjects, from self-driving cars to data mining. When a person takes a picture with their mobile phone, the photo easily ends up slightly crooked. People also take spontaneous photos with their phones, which can result in something irrelevant ending up in the corner of the image. This thesis combines machine learning with photo-editing tools. It explores how machine learning can be used to automatically crop images in an aesthetically pleasing way and how it can be used to create a portrait-cropping tool. It also goes through how a straightening function can be implemented with the help of machine learning. Finally, it compares these tools with the automatic cropping tools of other software.
Machine learning has existed for a long time, with applications ranging across many subjects, from self-driving cars to data mining. When a person takes a picture with a mobile phone, the image easily ends up slightly crooked. People also take spontaneous photos with their phones, which can result in something appearing at the edge of the image that should not be there. This thesis combines machine learning with photo-editing tools. It explores how machine learning can be used to automatically crop images in an aesthetically pleasing way and to create a portrait-cropping tool. It also covers how a straightening function can be implemented with the help of machine learning. Finally, these tools are compared with the automatic cropping tools of other programs.
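The automatic-crop idea above can be illustrated with a deliberately crude sketch: instead of a trained aesthetic model, local intensity variance stands in for a learned "interestingness" score, and the candidate window that maximizes it is kept. The image data and scoring choice are assumptions for illustration only.

```python
import numpy as np

def auto_crop(image, crop_h, crop_w):
    """Return (top, left) of the window with the highest intensity variance,
    a crude stand-in for a learned aesthetic score."""
    h, w = image.shape
    best, best_score = (0, 0), -1.0
    for top in range(h - crop_h + 1):
        for left in range(w - crop_w + 1):
            window = image[top:top + crop_h, left:left + crop_w]
            score = float(window.var())
            if score > best_score:
                best, best_score = (top, left), score
    return best

# Toy grayscale "photo": flat background, busy region at the bottom-right.
img = np.zeros((8, 8))
img[5:8, 5:8] = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
top, left = auto_crop(img, 4, 4)   # window that covers the detailed region
```

A learned model would replace `window.var()` with a score trained on human-cropped examples; the search over windows stays the same.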
37

Nordin, Alexander Friedrich. "End to end machine learning workflow using automation tools." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119776.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 79-80).
We have developed an open source library named Trane and integrated it with two open source libraries to build an end-to-end machine learning workflow that can facilitate rapid development of machine learning models. The three components of this workflow are Trane, Featuretools and ATM. Trane enumerates tens of prediction problems relevant to any dataset using the meta information about the data. Furthermore, Trane generates training examples required for training machine learning models. Featuretools is an open-source software for automatically generating features from a dataset. Auto Tune Models (ATM), an open source library, performs a high throughput search over modeling options to find the best modeling technique for a problem. We show the capability of these three tools and highlight the open-source development of Trane.
by Alexander Friedrich Nordin.
M. Eng.
38

Jalali, Mana. "Voltage Regulation of Smart Grids using Machine Learning Tools." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/95962.

Abstract:
Smart inverters have been considered the primary fast solution for voltage regulation in power distribution systems. Optimizing the coordination between inverters can be computationally challenging. Reactive power control using fixed local rules has been shown to be subpar. Here, nonlinear inverter control rules are proposed by leveraging machine learning tools. The designed control rules can be expressed by a set of coefficients and can be nonlinear functions of both remote and local inputs. The proposed control rules are designed to jointly minimize the voltage deviation across buses. By using support vector machines, control rules with sparse representations are obtained, which decreases the communication between the operator and the inverters. The designed control rules are tested under different grid conditions and compared with other reactive power control schemes. The results show promising performance.
With the advent of renewable energy into power systems, innovative and automatic monitoring and control techniques are required. More specifically, voltage regulation for distribution grids with solar generation can be a challenging task. Moreover, due to the frequency and intensity of the voltage changes, traditional utility-owned voltage regulation equipment is not useful in the long term. On the other hand, smart inverters installed with solar panels can be used for regulating the voltage. Smart inverters can be programmed to inject or absorb reactive power, which directly influences the voltage. The utility can monitor, control, and synchronize the inverters across the grid to maintain the voltage within the desired limits. Machine learning and optimization techniques can be applied to automate voltage regulation in smart grids using the smart inverters installed with solar panels. In this work, voltage regulation is addressed by reactive power control.
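The rule-fitting idea described above can be sketched minimally: fit coefficients that map voltage measurements to a reactive-power setpoint, then prune small coefficients to mimic the sparse, low-communication rules of the thesis. Plain least squares stands in here for the support-vector machinery, and all measurements are invented per-unit values.

```python
import numpy as np

rng = np.random.default_rng(0)
# measurements: local bus voltage plus two remote bus voltages (p.u. deviations)
X = rng.normal(0.0, 0.05, size=(200, 3))
# "ideal" reactive-power setpoints, driven mostly by the local measurement
q_ideal = -2.0 * X[:, 0] - 0.05 * X[:, 2] + rng.normal(0.0, 0.01, 200)

coef, *_ = np.linalg.lstsq(X, q_ideal, rcond=None)
sparse_coef = np.where(np.abs(coef) > 0.5, coef, 0.0)   # prune weak inputs

def control_rule(measurements):
    """Reactive-power setpoint from a (local, remote1, remote2) vector."""
    return float(sparse_coef @ measurements)

q = control_rule(np.array([0.04, 0.0, 0.0]))   # local over-voltage -> absorb
```

After pruning, the rule needs only the local measurement, so no remote values would have to be communicated; an SVM-based fit would produce the sparsity directly rather than by thresholding.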
39

Viswanathan, Srinidhi. "ModelDB : tools for machine learning model management and prediction storage." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113540.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-100).
Building a machine learning model is often an iterative process. Data scientists train hundreds of models before finding a model that meets acceptable criteria. But tracking these models and remembering the insights obtained from them is an arduous task. In this thesis, we present two main systems for facilitating better tracking, analysis, and querying of scikit-learn machine learning models. First, we introduce our scikit-learn client for ModelDB, a novel end-to-end system for managing machine learning models. The client allows data scientists to easily track diverse scikit-learn workflows with minimal changes to their code. Then, we describe our extension to ModelDB, PredictionStore. While the ModelDB client enables users to track the different models they have run, PredictionStore creates a prediction matrix to tackle the remaining piece in the puzzle: facilitating better exploration and analysis of model performance. We implement a query API to assist in analyzing predictions and answering nuanced questions about models. We also implement a variety of algorithms to recommend particular models to ensemble utilizing the prediction matrix. We evaluate ModelDB and PredictionStore on different datasets and determine ModelDB successfully tracks scikit-learn models, and most complex model queries can be executed in a matter of seconds using our query API. In addition, the workflows demonstrate significant improvement in accuracy using the ensemble algorithms. The overall goal of this research is to provide a flexible framework for training scikit-learn models, storing their predictions/models, and efficiently exploring and analyzing the results.
by Srinidhi Viswanathan.
M. Eng.
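The prediction-matrix idea behind PredictionStore can be sketched with a toy matrix: rows are models, columns are held-out examples, and an ensemble is recommended straight from the matrix. The data and the majority-vote rule below are illustrative assumptions, not ModelDB's actual API or algorithms.

```python
import numpy as np

# predictions[i, j] = prediction of model i on held-out example j (invented)
truth = np.array([1, 0, 1, 1, 0, 1, 0, 0])
predictions = np.array([
    [1, 0, 1, 0, 0, 1, 0, 1],    # model 0: wrong on examples 3 and 7
    [1, 0, 0, 1, 0, 1, 1, 0],    # model 1: wrong on examples 2 and 6
    [0, 1, 1, 1, 0, 1, 0, 0],    # model 2: wrong on examples 0 and 1
])

per_model_acc = (predictions == truth).mean(axis=1)

# Majority vote across models, computed straight from the prediction matrix:
ensemble = (predictions.sum(axis=0) >= 2).astype(int)
ensemble_acc = float((ensemble == truth).mean())
```

Because the three models err on disjoint examples, the vote corrects every individual mistake, which is exactly the kind of complementarity a prediction matrix makes easy to discover.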
40

Borodavkina, Lyudmila 1977. "Investigation of machine learning tools for document clustering and classification." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/8932.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 57-59).
Data clustering is a problem of discovering the underlying data structure without any prior information about the data. The focus of this thesis is to evaluate a few of the modern clustering algorithms in order to determine their performance in adverse conditions. Synthetic Data Generation software is presented as a useful tool both for generating test data and for investigating results of the data clustering. Several theoretical models and their behavior are discussed, and, as the result of analysis of a large number of quantitative tests, we come up with a set of heuristics that describe the quality of clustering output in different adverse conditions.
by Lyudmila Borodavkina.
M.Eng.
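The evaluation loop the abstract describes, namely generating synthetic clustered data, clustering it, and scoring the output against the known generating structure, can be sketched as follows. This uses a minimal Lloyd's k-means on two well-separated Gaussian clusters; all parameters are illustrative and the thesis's own generator and algorithms are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# two well-separated synthetic clusters with known membership
a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(50, 2))
b = rng.normal(loc=(5.0, 5.0), scale=0.3, size=(50, 2))
X = np.vstack([a, b])
truth = np.array([0] * 50 + [1] * 50)

def kmeans(X, centers, iters=20):
    """Minimal Lloyd's k-means from given initial centers."""
    centers = centers.copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(X, np.vstack([X[0], X[-1]]))   # seed one center per region
# agreement with the generating structure, up to label renaming
agreement = max((labels == truth).mean(), (labels != truth).mean())
```

Adverse conditions of the kind studied in the thesis would be simulated by shrinking the separation, unbalancing the cluster sizes, or adding noise dimensions, and watching how the agreement degrades.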
41

Song, Qi. "Developing machine learning tools to understand transcriptional regulation in plants." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93512.

Abstract:
Abiotic stresses constitute a major category of stresses that negatively impact plant growth and development. It is important to understand how plants cope with environmental stresses and reprogram gene responses which in turn confers stress tolerance. Recent advances of genomic technologies have led to the generation of much genomic data for the model plant, Arabidopsis. To understand gene responses activated by specific external stress signals, these large-scale data sets need to be analyzed to generate new insight of gene functions in stress responses. This poses new computational challenges of mining gene associations and reconstructing regulatory interactions from large-scale data sets. In this dissertation, several computational tools were developed to address the challenges. In Chapter 2, ConSReg was developed to infer condition-specific regulatory interactions and prioritize transcription factors (TFs) that are likely to play condition specific regulatory roles. Comprehensive investigation was performed to optimize the performance of ConSReg and a systematic recovery of nitrogen response TFs was performed to evaluate ConSReg. In Chapter 3, CoReg was developed to infer co-regulation between genes, using only regulatory networks as input. CoReg was compared to other computational methods and the results showed that CoReg outperformed other methods. CoReg was further applied to identified modules in regulatory network generated from DAP-seq (DNA affinity purification sequencing). Using a large expression dataset generated under many abiotic stress treatments, many regulatory modules with common regulatory edges were found to be highly co-expressed, suggesting that target modules are structurally stable modules under abiotic stress conditions. In Chapter 4, exploratory analysis was performed to classify cell types for Arabidopsis root single cell RNA-seq data. 
This is a first step towards construction of a cell-type-specific regulatory network for Arabidopsis root cells, which is important for improving current understanding of stress response.
Doctor of Philosophy
Abiotic stresses constitute a major category of stresses that negatively impact plant growth and development. It is important to understand how plants cope with environmental stresses and reprogram gene responses, which in turn confers stress tolerance to plants. Genomics technology has been used over the past decade to generate gene expression data under different abiotic stresses for the model plant, Arabidopsis. Recent genomic technologies, such as DAP-seq, have generated large-scale regulatory maps that indicate which genes have the potential to regulate other genes in the genome. However, this technology does not provide context-specific interactions: it is unknown which transcription factor can regulate which gene under a specific abiotic stress condition. To address this challenge, several computational tools were developed to identify regulatory interactions and co-regulating genes for stress response. In addition, using single cell RNA-seq data generated from the model plant organism Arabidopsis, preliminary analysis was performed to build a model that classifies Arabidopsis root cell types. This analysis is the first step towards the ultimate goal of constructing a cell-type-specific regulatory network for Arabidopsis, which is important for improving current understanding of stress response in plants.
42

Deng, Lihua [Verfasser]. "Understanding Toll-like Receptor Modulation Through Machine Learning / Lihua Deng." Berlin : Freie Universität Berlin, 2021. http://d-nb.info/1234984652/34.

43

Nagler, Dylan Jeremy. "SCHUBOT: Machine Learning Tools for the Automated Analysis of Schubert’s Lieder." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:12705172.

Abstract:
This paper compares various methods for automated musical analysis, applying machine learning techniques to gain insight about the Lieder (art songs) of composer Franz Schubert (1797-1828). Known as a rule-breaking, individualistic, and adventurous composer, Schubert produced hundreds of emotionally-charged songs that have challenged music theorists to this day. The algorithms presented in this paper analyze the harmonies, melodies, and texts of these songs. This paper begins with an exploration of the relevant music theory and machine learning algorithms (Chapter 1), alongside a general discussion of the place Schubert holds within the world of music theory. The focus is then turned to automated harmonic analysis and hierarchical decomposition of MusicXML data, presenting new algorithms for phrase-based analysis in the context of past research (Chapter 2). Melodic analysis is then discussed (Chapter 3), using unsupervised clustering methods as a complement to harmonic analyses. This paper then seeks to analyze the texts Schubert chose for his songs in the context of the songs’ relevant musical features (Chapter 4), combining natural language processing with feature extraction to pinpoint trends in Schubert’s career.
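The unsupervised melodic clustering of Chapter 3 can be loosely illustrated by representing each melody as a pitch-class histogram and pairing the closest melodies by distance. The toy MIDI pitch lists below are assumptions; the thesis works from MusicXML data rather than hard-coded numbers.

```python
import numpy as np

def pc_histogram(midi_pitches):
    """Normalized 12-bin pitch-class histogram for one melody."""
    h = np.zeros(12)
    for p in midi_pitches:
        h[p % 12] += 1.0
    return h / h.sum()

melodies = [
    [60, 62, 64, 65, 67, 69, 71, 72],   # C-major scale
    [60, 64, 67, 72, 67, 64, 60, 62],   # C-major arpeggio figure
    [61, 63, 66, 68, 61, 63, 66, 68],   # figure on the "black keys"
]
H = np.array([pc_histogram(m) for m in melodies])

D = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=2)  # pairwise distances
np.fill_diagonal(D, np.inf)
closest_pair = np.unravel_index(D.argmin(), D.shape)       # most similar pair
```

The two C-major melodies land closest together, while the black-key figure sits apart; a clustering algorithm run on such feature vectors groups melodies the same way.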
44

Parikh, Neena (Neena S. ). "Interactive tools for fantasy football analytics and predictions using machine learning." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100687.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-84).
The focus of this project is multifaceted: we aim to construct robust predictive models to project the performance of individual football players, and we plan to integrate these projections into a web-based application for in-depth fantasy football analytics. Most existing statistical tools for the NFL are limited to the use of macro-level data; this research looks to explore statistics at a finer granularity. We explore various machine learning techniques to develop predictive models for different player positions including quarterbacks, running backs, wide receivers, tight ends, and kickers. We also develop an interactive interface that will assist fantasy football participants in making informed decisions when managing their fantasy teams. We hope that this research will not only result in a well-received and widely used application, but also help pave the way for a transformation in the field of football analytics.
by Neena Parikh.
M. Eng.
45

Green, Pamela Dilys. "Extracting group relationships within changing software using text analysis." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/11896.

Abstract:
This research looks at identifying and classifying changes in evolving software by making simple textual comparisons between groups of source code files. The two areas investigated are software origin analysis and collusion detection. Textual comparison is attractive because it can be used in the same way for many different programming languages. The research includes the first major study using machine learning techniques in the domain of software origin analysis, which looks at the movement of code in an evolving system. The training set for this study, which focuses on restructured files, is created by analysing 89 software systems. Novel features, which capture abstract patterns in the comparisons between source code files, are used to build models which classify restructured files from unseen systems with a mean accuracy of over 90%. The unseen code is not only in C, the language of the training set, but also in Java and Python, which helps to demonstrate the language independence of the approach. As well as generating features for the machine learning system, textual comparisons between groups of files are used in other ways throughout the system: in filtering to find potentially restructured files, in ranking the possible destinations of the code moved from the restructured files, and as the basis for a new file comparison tool. This tool helps in the demanding task of manually labelling the training data, is valuable to the end user of the system, and is applicable to other file comparison tasks. These same techniques are used to create a new text-based visualisation for use in collusion detection, and to generate a measure which focuses on the unusual similarity between submissions. This measure helps to overcome problems in detecting collusion in data where files are of uneven size, where there is high incidental similarity or where more than one programming language is used.
The visualisation highlights interesting similarities between files, making the task of inspecting the texts easier for the user.
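The language-independent textual comparison at the heart of this approach can be sketched with the standard library's `difflib`, a stand-in for the thesis's own comparison tool; the file contents below are invented.

```python
import difflib

file_a = ["def area(r):", "    return 3.14 * r * r", "", "print(area(2))"]
file_b = ["def area(radius):", "    return 3.14 * radius * radius",
          "", "print(area(2))"]
file_c = ["import sys", "for line in sys.stdin:", "    print(line.upper())"]

def similarity(lines_a, lines_b):
    """Similarity in [0, 1] between two files compared as plain text."""
    return difflib.SequenceMatcher(
        None, "\n".join(lines_a), "\n".join(lines_b)).ratio()

s_ab = similarity(file_a, file_b)   # renamed variable: still very similar
s_ac = similarity(file_a, file_c)   # unrelated program: much lower
```

A high score between files from different systems (origin analysis) or different submissions (collusion detection) is the signal the thesis's features and measures build on.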
46

Dubey, Anshul. "Search and Analysis of the Sequence Space of a Protein Using Computational Tools." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14115.

Abstract:
A new approach to the process of Directed Evolution is proposed, which utilizes different machine learning algorithms. Directed Evolution is a process of improving a protein for catalytic purposes by introducing random mutations in its sequence to create variants. Through these mutations, Directed Evolution explores the sequence space, which is defined as all the possible sequences for a given number of amino acids. Each variant sequence is assigned to one of two classes, positive or negative, according to its activity or stability. By employing machine learning algorithms for feature selection on the sequences of these variants, the attributes, or amino acids, that are important for the classification into positive or negative can be identified. Support Vector Machines (SVMs) were utilized to identify the important individual amino acids for any protein, which have to be preserved to maintain its activity. The results for the case of beta-lactamase show that such residues can be identified with high accuracy while using a small number of variant sequences. Another class of machine learning problems, Boolean Learning, was used to extend this approach to identifying interactions between the different amino acids in a protein's sequence using the variant sequences. It was shown through simulations that such interactions can be identified for any protein with a reasonable number of variant sequences. For experimental verification of this approach, two fluorescent proteins, mRFP and DsRed, were used to generate variants, which were screened for fluorescence. Using Boolean Learning, an interacting pair was identified, which was shown to be important for the fluorescence. It was also shown through experiments and simulations that knowing such pairs can increase the fraction of active variants in the library.
A Boolean Learning algorithm was also developed for this application, which can learn Boolean functions from data in the presence of classification noise.
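The feature-selection step described above can be sketched as follows: one-hot encode each variant sequence, train a linear classifier on the activity labels, and read off which positions carry weight. A simple perceptron stands in here for the Support Vector Machine, and the toy variant library is constructed so that a single position (index 2) determines activity.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq):
    """One indicator feature per (position, amino acid) pair."""
    x = np.zeros(len(seq) * len(AMINO))
    for i, aa in enumerate(seq):
        x[i * len(AMINO) + AMINO.index(aa)] = 1.0
    return x

# Toy variant library: position 2 alone decides activity (G active, W not).
variants = ["AAGCD", "CAGAD", "AAGED", "DAGCA",
            "AAWCD", "CAWAD", "AAWED", "DAWCA"]
labels = [1, 1, 1, 1, -1, -1, -1, -1]

X = np.array([one_hot(s) for s in variants])
w = np.zeros(X.shape[1])
for _ in range(20):                        # perceptron training epochs
    for x, y in zip(X, labels):
        if y * (w @ x) <= 0:               # misclassified -> update
            w += y * x

# total weight magnitude attributed to each sequence position
per_position = np.abs(w).reshape(len(variants[0]), len(AMINO)).sum(axis=1)
key_position = int(per_position.argmax())
```

The classifier's weight concentrates entirely on the informative position, which is the residue a Directed Evolution campaign would then preserve.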
47

Arango, Argoty Gustavo Alonso. "Computational Tools for Annotating Antibiotic Resistance in Metagenomic Data." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/88987.

Abstract:
Metagenomics has become a reliable tool for the analysis of the microbial diversity and the molecular mechanisms carried out by microbial communities. By the use of next generation sequencing, metagenomic studies can generate millions of short sequencing reads that are processed by computational tools. However, with the rapid adoption of metagenomics, a large amount of data has been generated. This situation requires the development of computational tools and pipelines to manage the data scalability, accessibility, and performance. Interpretation of specific information from metagenomic data is a particular challenge for environmental samples, as current annotation systems only offer broad classification of microbial diversity and function. Therefore, I developed MetaStorm, a public web-service that facilitates customization of computational analysis for metagenomic data. The identification of antibiotic resistance genes (ARGs) from metagenomic data is carried out by searches against curated databases, producing a high rate of false negatives. Thus, I developed DeepARG, a deep learning approach that uses the distribution of sequence alignments to predict over 30 antibiotic resistance categories with high accuracy. Curation of ARGs is a labor-intensive process where errors can be easily propagated. Thus, I developed ARGminer, a web platform dedicated to the annotation and inspection of ARGs by using crowdsourcing. Effective environmental monitoring tools should ideally capture not only ARGs, but also mobile genetic elements and indicators of co-selective forces, such as metal resistance genes. Here, I introduce NanoARG, an online computational resource that takes advantage of the long reads produced by nanopore sequencing technology to provide insights into mobility, co-selection, and pathogenicity.
Sequence alignment has been one of the preferred methods for analyzing metagenomic data. However, it is slow and requires high computing resources. Therefore, I developed MetaMLP, a machine learning approach that uses a novel representation of protein sequences to perform classifications over protein functions. The method is accurate, is able to identify a larger number of hits compared to sequence alignments, and is >50 times faster than sequence alignment techniques.
Doctor of Philosophy
Antimicrobial resistance (AMR) is one of the biggest threats to public health. It has been estimated that the number of deaths caused by AMR will surpass those caused by cancer by 2050. The seriousness of these projections requires urgent actions to understand and control the spread of AMR. In the last few years, metagenomics has stood out as a reliable tool for the analysis of microbial diversity and AMR. By the use of next generation sequencing, metagenomic studies can generate millions of short sequencing reads that are processed by computational tools. However, with the rapid adoption of metagenomics, a large amount of data has been generated. This situation requires the development of computational tools and pipelines to manage the data scalability, accessibility, and performance. In this thesis, several strategies, ranging from command line and web-based platforms to machine learning, have been developed to address these computational challenges: computational pipelines to process metagenomics data in the cloud and on distributed systems, machine learning and deep learning tools that ease the computational cost of detecting antibiotic resistance genes in metagenomic data, and the integration of crowdsourcing as a way to curate and validate antibiotic resistance genes.
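The alignment-free classification idea behind MetaMLP can be loosely sketched: hash each sequence's k-mers into a fixed-length vector and classify by similarity. The toy sequences, the hashing scheme, and the nearest-centroid rule are all illustrative assumptions; the actual tool trains a neural network on its own protein representation.

```python
import numpy as np

def kmer_vector(seq, k=3, dim=64):
    """Hash k-mer counts into a fixed-length, normalized vector."""
    v = np.zeros(dim)
    for i in range(len(seq) - k + 1):
        v[hash(seq[i:i + k]) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# Invented training sequences for two made-up functional classes.
train = {
    "classA": ["ACDEFACDEFACDEF", "CDEFACDEFACDEFA"],
    "classB": ["KLMNPKLMNPKLMNP", "LMNPKLMNPKLMNPK"],
}
centroids = {c: np.mean([kmer_vector(s) for s in seqs], axis=0)
             for c, seqs in train.items()}

def classify(seq):
    """Assign the class whose centroid is most similar to the sequence."""
    v = kmer_vector(seq)
    return max(centroids, key=lambda c: float(centroids[c] @ v))

pred = classify("ACDEFACDEF")
```

Because every query is reduced to one fixed-length vector, classification cost no longer scales with database size the way alignment does, which is the source of the speedup the abstract reports.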
48

Schildt, Alexandra, and Jenny Luo. "Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279659.

Abstract:
AI has quickly grown from a vast concept to an emerging technology that many companies are looking to integrate into their businesses, and it is generally considered an ongoing “revolution” transforming science and society altogether. Researchers and organizations agree that AI and the recent rapid developments in machine learning carry huge potential benefits. At the same time, there is an increasing worry that ethical challenges are not being addressed in the design and implementation of AI systems. As a result, AI has sparked a debate about what principles and values should guide its development and use. However, there is a lack of consensus about what values and principles should guide the development, as well as what practical tools should be used to translate such principles into practice. Although researchers, organizations and authorities have proposed tools and strategies for working with ethical AI within organizations, a holistic perspective tying together the tools and strategies proposed in ethical, technical and organizational discourses is lacking. The thesis aims to contribute knowledge to bridge this gap by addressing the following purpose: to explore and present the different tools and methods companies and organizations should have in order to build machine learning applications in a fair and transparent manner. The study is of a qualitative nature and data collection was conducted through a literature review and interviews with subject matter experts. In our findings, we present a number of tools and methods to increase fairness and transparency. Our findings also show that companies should work with a combination of tools and methods, both outside and inside the development process, as well as in different stages of the machine learning development process.
Tools used outside the development process, such as ethical guidelines, appointed roles, workshops and trainings, have positive effects on alignment, engagement and knowledge while providing valuable opportunities for improvement. Furthermore, the findings suggest that it is crucial to translate high-level values into low-level requirements that are measurable and can be evaluated against. We propose a number of pre-model, in-model and post-model techniques that companies can and should implement to increase fairness and transparency in their machine learning systems.
AI has quickly grown from a vague concept into a new technology that many companies want to implement or are in the process of implementing. Researchers and organizations agree that AI and developments in machine learning have enormous potential benefits. At the same time, there is growing concern that the design and implementation of AI systems do not take the ethical risks into account. This has triggered a debate about which principles and values should guide AI in its development and use. There is no consensus on which values and principles should guide AI development, nor on which practical tools should be used to implement these principles in practice. Although researchers, organizations, and authorities have proposed tools and strategies for working with ethical AI within organizations, a holistic perspective that ties together the tools and strategies proposed in ethical, technical, and organizational discourses is missing. The report aims to bridge this gap with the following purpose: to explore and present the tools and methods that companies and organizations should have in order to build machine learning applications in a fair and transparent way. The study is qualitative in nature, and data collection was carried out through a literature study and interviews with subject-matter experts from research and industry. Our results present a number of tools and methods for increasing fairness and transparency in machine learning systems. They also show that companies should work with a combination of tools and methods, both outside and inside the development process, and at different stages of the development process. Tools outside the development process, such as ethical guidelines, appointed roles, workshops, and trainings, have positive effects on engagement and knowledge while providing valuable opportunities for improvement. Furthermore, the results indicate that it is critical to translate high-level principles into measurable requirement specifications. We propose a number of pre-model, in-model, and post-model tools that companies and organizations can implement to increase fairness and transparency in their machine learning systems.
APA, Harvard, Vancouver, ISO, and other citation styles
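The abstract above distinguishes pre-model, in-model and post-model techniques. As a minimal illustration of what a post-model check might look like, the sketch below computes the demographic parity difference between two groups; the function name, data and group labels are invented for illustration and are not taken from the thesis.

```python
# Illustrative post-model fairness check: demographic parity difference,
# the absolute gap in positive-prediction rate between two groups.
def demographic_parity_difference(y_pred, groups):
    """Return |P(pred=1 | group A) - P(pred=1 | group B)|."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

# Hypothetical model outputs for eight individuals, four per group.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # → 0.5
```

A value near 0 would indicate the model treats both groups similarly on this metric; a measurable threshold on it is one way a high-level fairness value can become a low-level requirement, as the abstract recommends.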
49

Jarvis, Matthew P. "Applying machine learning techniques to rule generation in intelligent tutoring systems." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-112724.

Full text of the source
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Intelligent Tutoring Systems; Model Tracing; Machine Learning; Artificial Intelligence; Programming by Demonstration. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Zaccara, Rodrigo Constantin Ctenas. "Anotação e classificação automática de entidades nomeadas em notícias esportivas em Português Brasileiro." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-06092012-135831/.

Full text of the source
Abstract:
The aim of this work is to develop a platform for the automatic annotation and classification of named entities in news written in Brazilian Portuguese. To narrow the scope of training and analysis, sports news about the 2011 Campeonato Paulista from the UOL (Universo Online) portal was used. The first artifact developed for this platform was the WebCorpus tool, whose main purpose is to ease the process of adding meta-information to words through a rich web interface designed to make the work fast and simple; with it, the named entities in the news are annotated and classified manually. The database was fed by the content acquisition and extraction (crawler) tool, also developed for this platform. The second artifact was the UOLCP2011 (UOL Campeonato Paulista 2011) corpus, annotated and classified manually with the WebCorpus tool using seven entity types: person, place, organization, team, championship, stadium and fans. To build the engine for automatic annotation and classification of named entities, three different techniques were used: maximum entropy, inverted indices, and methods merging the two. For each technique, three steps were carried out: algorithm development, training using machine learning techniques, and analysis of the best results.
APA, Harvard, Vancouver, ISO, and other citation styles
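One of the three techniques the abstract names, maximum entropy, is equivalent to multinomial logistic regression over per-token features. The sketch below shows the general shape of such a classifier; the feature set, the toy training sentences and the use of scikit-learn are illustrative assumptions, not the thesis's actual implementation (only the seven-label tag set comes from the abstract).

```python
# A minimal maximum-entropy (multinomial logistic regression) token
# classifier for named entities, using scikit-learn as the maxent engine.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def word_features(tokens, i):
    """Simple per-token features: the word, its casing, and its neighbors."""
    w = tokens[i]
    return {
        "word.lower": w.lower(),
        "word.istitle": w.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Tiny hand-labelled sample (hypothetical); tags follow the thesis's
# entity types, with "O" for tokens outside any entity.
sentences = [
    (["Corinthians", "venceu", "no", "Pacaembu"], ["time", "O", "O", "estádio"]),
    (["Santos", "joga", "em", "Santos"], ["time", "O", "O", "lugar"]),
]
X = [word_features(toks, i) for toks, tags in sentences for i in range(len(toks))]
y = [tag for _, tags in sentences for tag in tags]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict([word_features(["Palmeiras", "perdeu"], 0)])[0])
```

With real training data the feature dictionary would also carry affixes, gazetteer lookups and similar cues; the maxent model then weighs all of them jointly when assigning one of the entity labels to each token.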
