Dissertations / Theses on the topic "Machine learnings"


Consult the top 50 dissertations / theses for your research on the topic "Machine learnings".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if it is available in the metadata.

Browse theses from many scientific disciplines and compile an accurate bibliography.

1

Algohary, Ahmad. "PROSTATE CANCER RISK STRATIFICATION USING RADIOMICS FOR PATIENTS ON ACTIVE SURVEILLANCE: MULTI-INSTITUTIONAL USE CASES". Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1599231033923829.

2

Stohr, Daniel Christoph [Verfasser]. "Die beruflichen Anforderungen der Digitalisierung hinsichtlich formaler, physischer und kompetenzspezifischer Aspekte : Eine Analyse von Stellenanzeigen mittels Methoden des Text Minings und Machine Learnings / Daniel Christoph Stohr". Frankfurt a.M. : Peter Lang GmbH, Internationaler Verlag der Wissenschaften, 2019. http://d-nb.info/1185347240/34.

3

Tebbifakhr, Amirhossein. "Machine Translation For Machines". Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/320504.

Abstract:
Traditionally, Machine Translation (MT) systems are developed by targeting fluency (i.e. output grammaticality) and adequacy (i.e. semantic equivalence with the source text), criteria that reflect the needs of human end-users. However, recent advancements in Natural Language Processing (NLP) and the introduction of NLP tools in commercial services have opened new opportunities for MT. A particularly relevant one is related to the application of NLP technologies in low-resource language settings, for which the paucity of training data reduces the possibility of training reliable services. In this specific condition, MT can come into play by enabling the so-called "translation-based" workarounds. The idea is simple: first, input texts in the low-resource language are translated into a resource-rich target language; then, the machine-translated text is processed by well-trained NLP tools in the target language; finally, the output of these downstream components is projected back to the source language. This results in a new scenario, in which the end-user of MT technology is no longer a human but another machine. We hypothesize that current MT training approaches are not optimal for this setting, in which the objective is to maximize the performance of a downstream tool fed with machine-translated text rather than human comprehension. Under this hypothesis, this thesis introduces a new research paradigm, which we named "MT for machines", addressing a number of questions that arise from this novel view of the MT problem. Are there different quality criteria for humans and machines? What makes a good translation from the machine standpoint? What are the trade-offs between the two notions of quality? How to pursue machine-oriented objectives? How to serve different downstream components with a single MT system? How to exploit knowledge transfer to operate in different language settings with a single MT system? Elaborating on these questions, this thesis: i) introduces a novel and challenging MT paradigm, ii) proposes an effective method based on Reinforcement Learning and analyses its possible variants, iii) extends the proposed method to multitask and multilingual settings so as to serve different downstream applications and languages with a single MT system, iv) studies the trade-off between machine-oriented and human-oriented criteria, and v) discusses the successful application of the approach in two real-world scenarios.
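The "translation-based" pipeline and the machine-oriented reward described above can be condensed into a few lines. The sketch below is a self-contained toy under invented stand-ins (TOY_LEXICON, toy_translate and toy_classifier are not components of the thesis); it only illustrates how downstream task performance, rather than fluency or adequacy, becomes the quantity an RL-trained MT system would maximise.

```python
# Self-contained toy of the translation-based workaround. TOY_LEXICON,
# toy_translate and toy_classifier are invented stand-ins, not the
# thesis's components.
TOY_LEXICON = {"buono": "good", "cattivo": "bad", "film": "movie"}

def toy_translate(src_tokens):
    # Stand-in MT system: word-by-word lexicon lookup,
    # low-resource source -> resource-rich target.
    return [TOY_LEXICON.get(t, t) for t in src_tokens]

def toy_classifier(tgt_tokens):
    # Stand-in downstream NLP tool, well-trained (conceptually)
    # in the target language.
    return "pos" if "good" in tgt_tokens else "neg"

def machine_oriented_reward(src_tokens, gold_label):
    # The quantity an RL-trained "MT for machines" system would maximise:
    # downstream accuracy on the machine-translated text, not fluency.
    prediction = toy_classifier(toy_translate(src_tokens))
    return 1.0 if prediction == gold_label else 0.0

print(machine_oriented_reward(["film", "buono"], "pos"))  # -> 1.0
```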
4

Dinakar, Karthik. "Lensing Machines : representing perspective in machine learning". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112523.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2017.
Cataloged from PDF version of thesis. Due to the condition of the original material, with text that runs off the edges of the pages, the reproduction may have unavoidable flaws.
Includes bibliographical references (pages 167-172).
Generative models are venerated as full probabilistic models that randomly generate observable data given a set of latent variables that cannot be directly observed. They can be used to simulate values for variables in the model, allowing analysis by synthesis or model criticism, towards an iterative cycle of model specification, estimation, and critique. However, many datasets represent a combination of several viewpoints - different ways of looking at the same data that lead to various generalizations. For example, a corpus that has data generated by multiple people may be a mixture of several perspectives and can be viewed with different opinions by others. It isn't always possible to represent the viewpoints by clean separation, in advance, of examples representing each perspective and to train a separate model for each point of view. In this thesis, we introduce lensing, a mixed-initiative technique to (i) extract lenses or mappings between machine-learned representations and perspectives of human experts, and (ii) generate lensed models that afford multiple perspectives of the same dataset. We explore lensing of latent variable models in their configuration, parameter and evidential spaces. We apply lensing to three health applications, namely imbuing the perspectives of experts into latent variable models that analyze adolescent distress and crisis counseling.
by Karthik Dinakar.
Ph. D.
5

Roderus, Jens, Simon Larson and Eric Pihl. "Hadoop scalability evaluation for machine learning algorithms on physical machines : Parallel machine learning on computing clusters". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20102.

Abstract:
The amount of available data has allowed the field of machine learning to flourish. But with growing data set sizes comes an increase in algorithm execution times. Cluster computing frameworks provide tools for distributing data and processing power over several computer nodes, and allow algorithms to run in feasible time frames when data sets are large. Different cluster computing frameworks come with different trade-offs. In this thesis, the scalability of the execution time of machine learning algorithms running on the Hadoop cluster computing framework is investigated. A recent version of Hadoop and algorithms relevant in industrial machine learning, namely K-means, latent Dirichlet allocation and naive Bayes, are used in the experiments. This paper provides valuable information to anyone choosing between different cluster computing frameworks. The results show everything from moderate scalability to no scalability at all. These results indicate that Hadoop as a framework may have serious restrictions in how well tasks are actually parallelized. Possible scalability improvements could be achieved by modifying the machine learning library algorithms or by Hadoop parameter tuning.
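For readers unfamiliar with how such scalability results are quantified, a common convention (assumed here; the thesis may report different metrics) is speedup S(n) = T(1)/T(n) and parallel efficiency E(n) = S(n)/n for execution time T(n) on n nodes. A minimal sketch with made-up timings:

```python
# Speedup and parallel efficiency from measured execution times.
# The timing values below are invented for illustration, not results
# from the thesis.

def scalability(times_by_nodes):
    base = times_by_nodes[1]  # single-node execution time T(1)
    report = {}
    for n, t in sorted(times_by_nodes.items()):
        speedup = base / t
        report[n] = (speedup, speedup / n)  # (speedup, efficiency)
    return report

# Hypothetical K-means run times (seconds) on a Hadoop-like cluster.
for n, (s, e) in scalability({1: 840.0, 2: 500.0, 4: 330.0, 8: 290.0}).items():
    print(f"{n} nodes: speedup {s:.2f}, efficiency {e:.2f}")
```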
6

Kent, W. F. "Machine learning for parameter identification of electric induction machines". Thesis, University of Liverpool, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399178.

Abstract:
This thesis is concerned with the application of simulated evolution (SE) to the steady-state parameter identification problem of a simulated and a real 3-phase induction machine, over the no-load direct-on-line start period. In the case of the simulated 3-phase induction machine, Kron's two-axis dynamic mathematical model was used to generate the real and simulated system responses, where the induction machine parameters remain constant over the entire range of slip. The model was used in the actual-value as well as the per-unit system, and the parameters were estimated using both the genetic algorithm (GA) and evolutionary programming (EP) from the machine's dynamic response to a direct-on-line start. Two measurement vectors represented the dynamic responses, and all the parameter identification processes were subject to five different levels of measurement noise. For the case of the real 3-phase induction machine, the real system responses were generated by the real 3-phase induction machine whilst the simulated system responses were generated by Kron's model. However, the real induction machine's parameters are not constant over the range of slip, because of the nonlinearities caused by the skin effect and saturation. Therefore, the parameter identification of a real 3-phase induction machine, using EP from the machine's dynamic response to a direct-on-line start, was not possible by applying the same methodology used for estimating the parameters of the simulated, constant-parameter, 3-phase induction machine.
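As a rough illustration of evolutionary parameter identification, the sketch below evolves a population of candidate parameter vectors so that a simulated response matches a noisy "measured" one. The first-order step response is a deliberately simple stand-in for Kron's two-axis model, and the GA operators are generic textbook choices, not the exact configuration used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

def simulate(params):
    # Stand-in dynamic response (first-order step response), playing the
    # role of Kron's two-axis model in the thesis.
    gain, tau = params
    return gain * (1.0 - np.exp(-t / tau))

# Noisy "measured" direct-on-line response with true params (2.0, 0.15).
measured = simulate((2.0, 0.15)) + rng.normal(0.0, 0.02, t.size)

def fitness(params):
    # Negative mean squared error between simulation and measurement.
    return -np.mean((simulate(params) - measured) ** 2)

lo, hi = np.array([0.1, 0.01]), np.array([5.0, 1.0])
pop = rng.uniform(lo, hi, size=(50, 2))            # initial population
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]        # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=40)]
    children = np.clip(children + rng.normal(0.0, 0.05, children.shape), lo, hi)
    pop = np.vstack([parents, children])           # next generation

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated (gain, tau):", best)              # approaches (2.0, 0.15)
```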
7

Thorén, Daniel. "Radar based tank level measurement using machine learning : Agricultural machines". Thesis, Linköpings universitet, Programvara och system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176259.

Abstract:
Agriculture is becoming more dependent on computerized solutions to make the farmer's job easier. The big step that many companies are working towards is fully autonomous vehicles that work the fields. To that end, the equipment fitted to said vehicles must also adapt and become autonomous. Making this equipment autonomous takes many incremental steps, one of which is developing an accurate and reliable tank level measurement system. In this thesis, a system for tank level measurement in a seed planting machine is evaluated. Traditional systems use load cells to measure the weight of the tank; however, these types of systems are expensive to build and cumbersome to repair. They also add a lot of weight to the equipment, which increases the fuel consumption of the tractor. Thus, this thesis investigates the use of radar sensors together with a number of Machine Learning algorithms. Fourteen radar sensors are fitted to a tank at different positions, data is collected, and a preprocessing method is developed. Then, the data is used to test the following Machine Learning algorithms: Bagged Regression Trees (BG), Random Forest Regression (RF), Boosted Regression Trees (BRT), Linear Regression (LR), Linear Support Vector Machine (L-SVM), and Multi-Layer Perceptron Regressor (MLPR). The model with the best 5-fold cross-validation scores was Random Forest, closely followed by Boosted Regression Trees. A robustness test, using 5 previously unseen scenarios, revealed that the Boosted Regression Trees model was the most robust. The radar position analysis showed that 6 sensors together with the MLPR model gave the best RMSE scores. In conclusion, the models performed well on this type of system, which shows that they might be a competitive alternative to load cell based systems.
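A minimal sketch of the evaluation pattern described above, assuming synthetic data in place of the 14-sensor radar measurements and scikit-learn's stock estimators as stand-ins for RF, BRT and MLPR:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the radar data: 14 features, one per sensor.
X, y = make_regression(n_samples=500, n_features=14, noise=5.0, random_state=0)

models = {
    "RF": RandomForestRegressor(random_state=0),
    "BRT": GradientBoostingRegressor(random_state=0),  # stands in for BRT
    "MLPR": make_pipeline(StandardScaler(),
                          MLPRegressor(max_iter=2000, random_state=0)),
}
for name, model in models.items():
    # 5-fold cross-validated RMSE, as in the selection procedure above.
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE {-scores.mean():.2f} (+/- {scores.std():.2f})")
```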
8

Romano, Donato. "Machine Learning algorithms for predictive diagnostics applied to automatic machines". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22319/.

Abstract:
This thesis analyses the advent of Industry 4.0 within the packaging industry. In particular, the importance of predictive diagnostics is discussed, and several approaches for deriving descriptive models of the problem from the data are analysed and tested. In addition, the main Machine Learning techniques are applied in order to classify the analysed data into their respective classes.
9

Schneider, C. "Using unsupervised machine learning for fault identification in virtual machines". Thesis, University of St Andrews, 2015. http://hdl.handle.net/10023/7327.

Abstract:
Self-healing systems promise operating cost reductions in large-scale computing environments through the automated detection of, and recovery from, faults. However, at present there appears to be little known empirical evidence comparing the different approaches, or demonstrations that such implementations reduce costs. This thesis compares previous and current self-healing approaches before demonstrating a new, unsupervised approach that combines artificial neural networks with performance tests to perform fault identification in an automated fashion, i.e. the correct and accurate determination of which computer features are associated with a given performance test failure. Several key contributions are made in the course of this research, including an analysis of the different types of self-healing approaches based on their contextual use, a baseline for future comparisons between self-healing frameworks that use artificial neural networks, and a successful, automated fault identification in cloud infrastructure, more specifically in virtual machines. This approach uses three established machine learning techniques: Naïve Bayes, Baum-Welch, and Contrastive Divergence Learning. The latter demonstrates minimisation of human interaction beyond previous implementations by producing a list, in decreasing order of likelihood, of potential root causes (i.e. fault hypotheses), which brings the state of the art one step closer toward fully self-healing systems. This thesis also examines the impact that different types of faults have on their respective identification. This helps to understand the validity of the data being presented and how the field is progressing, whilst examining the differences in impact on identification between emulated thread crashes and errant user changes – a contribution believed to be unique to this research. Lastly, future research avenues and conclusions in automated fault identification are described, along with lessons learned throughout this endeavor. This includes the progression of artificial neural networks, how learning algorithms are being developed and understood, and possibilities for automatically generating feature locality data.
10

SOAVE, Elia. "Diagnostics and prognostics of rotating machines through cyclostationary methods and machine learning". Doctoral thesis, Università degli studi di Ferrara, 2022. http://hdl.handle.net/11392/2490999.

Abstract:
In recent decades, vibration analysis has been exploited for monitoring many mechanical systems in industrial applications. Although several works have demonstrated that vibration-based diagnostics can reach satisfactory results, today's industrial scenario is changing deeply, driven by the fundamental need to reduce time and costs. In this direction, academic research has to focus on improving the computational efficiency of the signal processing techniques applied in the mechanical diagnostics field. In the same way, the industrial world requires increasing attention to predictive maintenance, in order to estimate the system failure and avoid unnecessary machine downtimes for maintenance operations. In this context, research activity has moved in recent years towards the development of prognostic models for the prediction of the remaining useful life. However, it is important to keep in mind how strictly the two fields are connected, diagnostics being the base on which the effectiveness of each prognostic model is built. On these grounds, this thesis focuses on these two different but linked areas for the detection and prediction of possible failures in rotating machines in the industrial framework. The first part of the thesis focuses on the development of a blind deconvolution indicator based on cyclostationary theory for fault identification in rotating machines. The novel criterion aims to decrease the computational cost of blind deconvolution by exploiting the Fourier-Bessel series expansion, whose modulated nature is more comparable with the fault-related vibration pattern. The proposed indicator is extensively compared with its cyclostationary counterpart based on the classic Fourier transform, taking into account both synthesized and real vibration signals. The comparison proves the improvement given by the proposed criterion in terms of the number of operations required by the blind deconvolution algorithm, as well as its diagnostic capability for noisy measured signals. The originality of this part lies in the combination of cyclostationarity and the Fourier-Bessel transform, which leads to the definition of a novel blind deconvolution criterion that keeps the diagnostic effectiveness of cyclostationarity while reducing the computational cost in order to meet industrial requirements. The second part concerns the definition of a novel prognostic model from the family of hidden Markov models, constructed on a generalized Gaussian distribution. The target of the proposed method is a better fit to the data distribution in the last damaging phase. In fact, fault appearance and evolution are reflected in a modification of the observation distribution within the states; consequently, a generalized density function allows the form of the distribution to change through the values of a few model parameters. The proposed method is compared, in terms of fitting quality and state-sequence prediction, with the classic Gaussian-based hidden Markov model through the analysis of several run-to-failure tests performed on rolling element bearings and more complex systems. The novelty of this part is the definition of a new iterative algorithm for the estimation of the generalized Gaussian model parameters, for both monovariate and multivariate distributions, starting from the observations of the physical system.
Furthermore, the strict connection between diagnostics and prognostics is demonstrated through the analysis of a non-monotonically increasing damaging process, proving how the selection of a suitable indicator enables correct health-state estimation.
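At the core of the proposed model is the monovariate generalized Gaussian density, whose shape parameter lets the distribution range from Laplacian (beta = 1) through Gaussian (beta = 2) and beyond. The sketch below only fits that density with SciPy's gennorm and compares it to a plain Gaussian by log-likelihood; the thesis's own iterative estimator and the surrounding hidden-Markov machinery are not reproduced.

```python
import numpy as np
from scipy.stats import gennorm, norm

rng = np.random.default_rng(1)
# Heavy-tailed "observations", standing in for late-damage features.
x = rng.laplace(loc=0.0, scale=1.0, size=2000)

beta, mu, alpha = gennorm.fit(x)                 # shape beta: 2 = Gaussian, 1 = Laplace
ll_gg = gennorm.logpdf(x, beta, mu, alpha).sum()
ll_g = norm.logpdf(x, *norm.fit(x)).sum()

print(f"fitted shape beta = {beta:.2f}")         # ~1 for Laplacian data
print(f"log-likelihood: generalized {ll_gg:.1f} vs Gaussian {ll_g:.1f}")
```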
11

Andersson, Viktor. "Machine Learning in Logistics: Machine Learning Algorithms : Data Preprocessing and Machine Learning Algorithms". Thesis, Luleå tekniska universitet, Datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64721.

Abstract:
Data Ductus is a Swedish IT consultancy, with a customer base ranging from small startups to large-scale corporations. The company has grown steadily since the 80s and has established offices in both Sweden and the US. With the help of machine learning, this project presents a possible solution to the errors caused by the human factor in the logistics business. A way of preprocessing data before applying it to a machine learning algorithm, as well as a couple of algorithms to use, is presented.
12

Angola, Enrique. "Novelty Detection Of Machinery Using A Non-Parametric Machine Learning Approach". ScholarWorks @ UVM, 2018. https://scholarworks.uvm.edu/graddis/923.

Abstract:
A novelty detection algorithm inspired by human audio pattern recognition is conceptualized and experimentally tested. This anomaly detection technique can be used to monitor the health of a machine or could also be coupled with a current state-of-the-art system to enhance its fault detection capabilities. Time-domain data obtained from a microphone is processed by applying a short-time FFT, which returns time-frequency patterns. Such patterns are fed to a machine learning algorithm, which is designed to detect novel signals and identify windows in the frequency domain where such novelties occur. The algorithm presented in this work uses one-dimensional kernel density estimation for different frequency bins. This process eliminates the need for data dimension reduction algorithms. The method of "pseudo-likelihood cross validation" is used to find an independent optimal kernel bandwidth for each frequency bin. Metrics such as the "Individual Node Relative Difference" and "Total Novelty Score" are presented in this work and used to assess the degree of novelty of a new signal. Experimental datasets containing synthetic and real novelties are used to illustrate and test the novelty detection algorithm. Novelties are successfully detected in all experiments. The presented novelty detection technique could greatly enhance the performance of current state-of-the-art condition monitoring systems, or could also be used as a stand-alone system.
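A compact sketch of the per-bin density idea, assuming SciPy's gaussian_kde with its default bandwidth rule in place of the pseudo-likelihood cross-validation used in the thesis; the score below is a simplified stand-in for the "Individual Node Relative Difference" and "Total Novelty Score" metrics.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Healthy baseline spectra: (frames, frequency bins).
baseline = rng.normal(0.0, 1.0, size=(200, 16))
# One independent 1-D kernel density estimate per frequency bin.
kdes = [gaussian_kde(baseline[:, b]) for b in range(baseline.shape[1])]

def novelty_scores(frame):
    # Negative log-likelihood per bin: large values = novel energy there.
    return np.array([-np.log(k(frame[b]))[0] for b, k in enumerate(kdes)])

new_frame = rng.normal(0.0, 1.0, 16)
new_frame[5] += 6.0                               # inject a novelty in bin 5
scores = novelty_scores(new_frame)
print("most novel bin:", scores.argmax(), "total novelty:", scores.sum())
```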
13

Gustafsson, Robin, and Lucas Fröjdendahl. "Machine Learning for Traffic Control of Unmanned Mining Machines : Using the Q-learning and SARSA algorithms". Thesis, KTH, Hälsoinformatik och logistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260285.

Abstract:
Manual configuration of rules for unmanned mining machine traffic control can be time-consuming and therefore expensive. This paper presents a Machine Learning approach for the automatic configuration of traffic-control rules in mines with autonomous mining machines, using Q-learning and SARSA. The results show that automation might be able to cut the time taken to configure traffic rules from 1-2 weeks to a maximum of approximately 6 hours, which would decrease the cost of deployment. Tests show that, in the worst case, the developed solution is able to run continuously for 24 hours with at least 82% accuracy, compared to the 100% accuracy of the manual configuration. The conclusion is that machine learning can plausibly be used for the automatic configuration of traffic rules. Further work on increasing the accuracy to 100% is needed before it can replace manual configuration. It remains to be examined whether the conclusion retains pertinence in more complex environments with larger layouts and more machines.
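For reference, the two tabular update rules named in the title differ only in how they bootstrap the next state's value. A minimal sketch with placeholder states and actions (no mine layout is modelled):

```python
import numpy as np

n_states, n_actions, alpha, gamma = 10, 4, 0.1, 0.9
Q = np.zeros((n_states, n_actions))

def q_learning_update(s, a, r, s_next):
    # Off-policy: bootstrap from the greedy action in s_next.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy: bootstrap from the action actually taken in s_next.
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

q_learning_update(0, 1, 1.0, 2)
sarsa_update(0, 1, 1.0, 2, 3)
print(Q[0, 1])
```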
14

Tarsa, Stephen J. "Machine Learning for Machines: Data-Driven Performance Tuning at Runtime Using Sparse Coding". Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:14226074.

Abstract:
We develop methods for adjusting device configurations to runtime conditions based on system-state predictions. Our approach statistically models performance data collected by either actively probing conditions such as wireless link quality, or leveraging existing infrastructure such as hardware performance counters. By predicting future runtime characteristics, we enable on-the-fly changes to wireless transmission schedules, voltage and frequency in circuits, and data placement in storage systems. In highly-variable everyday use-cases, we demonstrate large performance gains not by designing new protocols or system configurations, but by more judiciously using those that exist. This thesis presents a state-modeling framework based on sparse feature representation. It is applied in diverse application scenarios to data representing: (1) packet loss over diverse wireless links; (2) circuit performance counters collected during user-driven workloads; (3) access pattern statistics measured from data-center storage systems. Our framework uses unsupervised clustering to discover latent statistical structure in large datasets. We exploit this stable structure to reduce overfitting in supervised learning models like Support Vector Machine (SVM) classifiers and Classification and Regression Trees (CART) trained on small datasets. As a result, we can capture transient predictive statistics that change based on wireless environment, circuit workload, and storage application. Given the magnitude of the performance improvements and the potential economic opportunity, we hope that this work becomes the foundation for a broad investigation into on-platform data-driven device optimization, dubbed Machine Learning for Machines (MLM).
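A toy sketch of the overall pattern, unsupervised sparse feature learning followed by a small supervised model, using scikit-learn stand-ins (MiniBatchDictionaryLearning, SVC) and synthetic data rather than the wireless, circuit, or storage traces studied in the thesis:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the performance traces.
X, y = make_classification(n_samples=300, n_features=40, random_state=0)

# Unsupervised step: learn a dictionary and encode each sample sparsely.
dico = MiniBatchDictionaryLearning(n_components=20, alpha=1.0, random_state=0)
codes = dico.fit_transform(X)          # sparse codes as features

# Supervised step: a small classifier on raw vs. sparse-coded features.
print("SVM on raw features :", cross_val_score(SVC(), X, y, cv=5).mean())
print("SVM on sparse codes :", cross_val_score(SVC(), codes, y, cv=5).mean())
```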
15

Borngrund, Carl. "Machine vision for automation of earth-moving machines : Transfer learning experiments with YOLOv3". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75169.

Abstract:
This master thesis investigates the possibility of creating a machine vision solution for the automation of earth-moving machines. This research was done because, without some type of vision system, it will not be possible to create a fully autonomous earth-moving machine that can safely be used around humans or other machines. Cameras were used as the primary sensors as they are cheap, provide high resolution, and are the type of sensor that most closely mimics the human vision system. The purpose of this master thesis was to use existing real-time object detectors together with transfer learning and examine whether they can successfully be used to extract information in environments such as construction, forestry and mining. The amount of data needed to successfully train a real-time object detector was also investigated. Furthermore, the thesis examines whether there are specific situations that are difficult for the defined object detector, how reliable the object detector is, and finally how service-oriented architecture principles can be used to create deep learning systems. To investigate the questions formulated above, three data sets were created in which different properties were varied. These properties were light conditions, ground material and dump truck orientation. The data sets were created using a toy dump truck together with a similarly sized wheel loader with a camera mounted on the roof of its cab. The first data set contained only indoor images where the dump truck was placed in different orientations but neither the light nor the ground material changed. The second data set contained images where the light source was kept constant, but the dump truck orientation and ground materials changed. The last data set contained images where all properties were varied. The real-time object detector YOLOv3 was used to examine how a real-time object detector would perform depending on which of the three data sets it was trained on. No matter the data set, it was possible to train a model to perform real-time object detection. Using an Nvidia 980 TI, the inference time of the model was around 22 ms, which is more than enough to classify videos running at 30 fps. All three data sets converged to a training loss of around 0.10. The data set which contained more varied data, such as the one where all properties were changed, performed considerably better, reaching a validation loss of 0.164, compared to the indoor data set, containing the least varied data, which only reached a validation loss of 0.257. The size of the data set was also a factor in the performance; however, it was not as important as having varied data. The results also showed that all three data sets could reach a mAP score of around 0.98 using transfer learning.
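The transfer-learning principle used here can be illustrated with a generic PyTorch classifier: freeze a pretrained backbone and retrain only a new head on the target classes. This is a sketch of the general pattern (torchvision >= 0.13 weights API assumed), not the YOLOv3 detector training actually performed in the thesis.

```python
import torch
from torch import nn
from torchvision import models

# Freeze a pretrained backbone; only the new head will be trained.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                      # keep pretrained features

# Hypothetical 2-class head, e.g. dump truck vs. background.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)                 # stand-in batch
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()                                      # gradients reach only the head
optimizer.step()
print(float(loss))
```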
16

MINERVINI, MARCELLO. "Multi-sensor analysis and machine learning classification approach for diagnostics of electrical machines". Doctoral thesis, Università degli studi di Pavia, 2022. http://hdl.handle.net/11571/1464785.

17

Liu, Yi. "Studies on support vector machines and applications to video object extraction". Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1158588434.

18

Stiernborg, Sebastian, and Sara Ervik. "Evaluation of Machine Learning Classification Methods : Support Vector Machines, Nearest Neighbour and Decision Tree". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209119.

Abstract:
With more and more data available, the interest in and use of machine learning is growing, and so does the need for classification. Classification is an important method within machine learning for data simplification and prediction. This report evaluates three classification methods for supervised learning: Support Vector Machines (SVM) with several kernels, Nearest Neighbour (k-NN) and Decision Tree (DT). The methods were evaluated based on the factors accuracy, precision, recall and time. The experiments were conducted on artificial data created to represent a variation of distributions, with a limitation of only 2 features and 3 classes. Different distributions of data were chosen to challenge each classification method. The results show that the measurements for accuracy and time vary considerably for the differently distributed datasets. SVM with RBF kernel performed better in terms of accuracy in comparison to the other classification methods. k-NN scored slightly lower accuracy values than SVM with RBF kernel in general, but performed better on the challenging dataset. DT is the least time-consuming algorithm and was significantly faster than the other classification methods. The only method that could compete with DT on time was k-NN, which was faster than DT for the dataset with small spread and coinciding classes. Although a clear trend can be seen in the results, the area needs to be studied further to draw a comprehensive conclusion, due to the limitations of the artificially generated datasets in this study.
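A minimal sketch of this comparison protocol in scikit-learn, measuring accuracy and wall-clock time on generic synthetic data rather than the hand-crafted 2-feature, 3-class distributions of the thesis:

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Generic 2-feature, 3-class synthetic data.
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVM (RBF)", SVC(kernel="rbf")),
                  ("k-NN", KNeighborsClassifier()),
                  ("DT", DecisionTreeClassifier(random_state=0))]:
    start = time.perf_counter()
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy {acc:.3f}, time {time.perf_counter() - start:.4f}s")
```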
19

Collazo, Santiago Bryan Omar. "Machine learning blocks". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100301.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references.
This work presents MLBlocks, a machine learning system that lets data scientists explore the space of modeling techniques in a very easy and efficient manner. We show how the system is very general, in the sense that virtually any problem and dataset can be cast to use MLBlocks, and how it supports the exploration of Discriminative Modeling, Generative Modeling and the use of synthetic features to boost performance. MLBlocks is highly parameterizable, and some of its powerful features include the ease of formulating lead and lag experiments for time series data, its simple interface for automation, and its extensibility to additional modeling techniques. We show how we used MLBlocks to quickly get results for two very different real-world data science problems. In the first, we used time series data from Massive Open Online Courses to cast many lead and lag formulations of predicting student dropout. In the second, we used MLBlocks' Discriminative Modeling functionality to find the best-performing model for predicting the destination of a car given its past trajectories. This latter functionality is self-optimizing and will find the best model by exploring a space of 11 classification algorithms with a combination of Multi-Armed Bandit strategies and Gaussian Process optimizations, all in a distributed fashion in the cloud.
by Bryan Omar Collazo Santiago.
M. Eng.
20

Cardamone, Dario. "Support Vector Machine a Machine Learning Algorithm". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Abstract:
This thesis examines the Support Vector Machine classification algorithm. More specifically, it considers its formulation as a Mixed Integer Program optimization problem for the supervised binary classification of a data set.
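For context, the standard soft-margin SVM primal is reproduced below; Mixed Integer Program formulations of the kind the thesis considers typically start from it and introduce binary misclassification variables in place of (or alongside) the continuous slacks. The thesis's exact formulation is not reproduced here.

```latex
% Standard soft-margin SVM primal for data (x_i, y_i), y_i in {-1, +1}.
\min_{w,\,b,\,\xi}\;\; \frac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{n} \xi_i
\qquad \text{s.t.} \quad y_i \left( w^\top x_i + b \right) \ge 1 - \xi_i,
\quad \xi_i \ge 0, \quad i = 1, \dots, n.
```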
21

Shukla, Ritesh. "Machine learning ecosystem : implications for business strategy centered on machine learning". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/107342.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, System Design and Management Program, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 48-50).
As interest in adopting machine learning as a core component of a business strategy increases, business owners face the challenge of integrating an uncertain and rapidly evolving technology into their organization, and depending on it for the success of their strategy. The field of machine learning has a rich literature on modeling the technical systems that implement machine learning. This thesis attempts to connect the literature on business and technology, and on the evolution and adoption of technology, to the emergent properties of machine learning systems. It provides high-level levers and frameworks to better prepare business owners to adopt machine learning to satisfy their strategic goals.
by Ritesh Shukla.
S.M. in Engineering and Management
22

Huembeli, Patrick. "Machine learning for quantum physics and quantum physics for machine learning". Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672085.

Abstract:
Research at the intersection of machine learning (ML) and quantum physics is a recent, growing field due to the enormous expectations and the success of both fields. ML is arguably one of the most promising technologies that has and will continue to disrupt many aspects of our lives. The way we do research is almost certainly no exception, and ML, with its unprecedented ability to find hidden patterns in data, will be assisting future scientific discoveries. Quantum physics on the other side, even though it is sometimes not entirely intuitive, is one of the most successful physical theories, and we are on the verge of adopting some quantum technologies in our daily life. Quantum many-body physics is a subfield of quantum physics where we study the collective behavior of particles or atoms and the emergence of phenomena that are due to this collective behavior, such as phases of matter. The study of phase transitions of these systems often requires some intuition of how we can quantify the order parameter of a phase. ML algorithms can imitate something similar to intuition by inferring knowledge from example data. They can, therefore, discover patterns that are invisible to the human eye, which makes them excellent candidates to study phase transitions. At the same time, quantum devices are known to be able to perform some computational tasks exponentially faster than classical computers, and they are able to produce data patterns that are hard to simulate on classical computers. Therefore, there is the hope that ML algorithms run on quantum devices show an advantage over their classical analog. This thesis is devoted to studying two different paths along the front lines of ML and quantum physics. On one side, we study the use of neural networks (NN) to classify phases of matter in many-body quantum systems. On the other side, we study ML algorithms that run on quantum computers. The connection between ML for quantum physics and quantum physics for ML in this thesis is an emerging subfield of ML, the interpretability of learning algorithms. A crucial ingredient in the study of phase transitions with NNs is a better understanding of the predictions of the NN, to eventually infer a model of the quantum system, and interpretability can assist us in this endeavor. The interpretability method that we study analyzes the influence of the training points on a test prediction, and it depends on the curvature of the NN loss landscape. This further inspired an in-depth study of the loss landscapes of quantum machine learning (QML) applications, which we will also discuss. In this thesis, we give answers to the questions of how we can leverage NNs to classify phases of matter, and we use a method that allows domain adaptation to transfer the learned "intuition" from systems without noise onto systems with noise. To map the phase diagram of quantum many-body systems in a fully unsupervised manner, we study a method known from anomaly detection that allows us to reduce the human input to a minimum. We also use interpretability methods to study NNs that are trained to distinguish phases of matter, to understand whether the NNs are learning something similar to an order parameter and whether their way of learning can be made more accessible to humans. And finally, inspired by the interpretability of classical NNs, we develop tools to study the loss landscapes of variational quantum circuits to identify possible differences between classical and quantum ML algorithms that might be leveraged for a quantum advantage.
23

Menke, Joshua E. "Improving machine learning through oracle learning /". Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd1726.pdf.

24

Menke, Joshua Ephraim. "Improving Machine Learning Through Oracle Learning". BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/843.

Abstract:
The following dissertation presents a new paradigm for improving the training of machine learning algorithms, oracle learning. The main idea in oracle learning is that instead of training directly on a set of data, a learning model is trained to approximate a given oracle's behavior on a set of data. This can be beneficial in situations where it is easier to obtain an oracle than it is to use it at application time. It is shown that oracle learning can be applied to more effectively reduce the size of artificial neural networks, to more efficiently take advantage of domain experts by approximating them, and to adapt a problem more effectively to a machine learning algorithm.
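The core idea can be sketched in a few lines: relabel the training data with the oracle's outputs and fit the smaller model to those instead of the raw labels. The forest-to-tree pairing below is an illustrative assumption; the thesis works with artificial neural networks.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# "Oracle": a large model whose behavior we want a small model to mimic.
oracle = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
oracle_labels = oracle.predict(X)                  # relabel data with the oracle

# Same small model, trained on raw labels vs. on the oracle's labels.
student_direct = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
student_oracle = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, oracle_labels)

print("agreement with oracle, trained on raw labels   :",
      (student_direct.predict(X) == oracle_labels).mean())
print("agreement with oracle, trained on oracle labels:",
      (student_oracle.predict(X) == oracle_labels).mean())
```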
25

Papakyriakopoulos, Orestis [Verfasser], Simon [Akademischer Betreuer] Hegelich, Jürgen [Gutachter] Pfeffer and Simon [Gutachter] Hegelich. "Political Machines: Machine learning for understanding the politics of social machines / Orestis Papakyriakopoulos ; Gutachter: Jürgen Pfeffer, Simon Hegelich ; Betreuer: Simon Hegelich". München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/121302627X/34.

26

Craddock, Richard Cameron. "Support vector classification analysis of resting state functional connectivity fMRI". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31774.

Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Hu, Xiaoping; Committee Co-Chair: Vachtsevanos, George; Committee Member: Butera, Robert; Committee Member: Gurbaxani, Brian; Committee Member: Mayberg, Helen; Committee Member: Yezzi, Anthony. Part of the SMARTech Electronic Thesis and Dissertation Collection.
27

Mauricio, Palacio Sebastián. "Machine-Learning Applied Methods". Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/669286.

Abstract:
This work follows several topics, with every new chapter introducing an economic prediction problem and showing how traditional approaches can be complemented with new techniques like machine learning and deep learning. These powerful tools, combined with principles of economic theory, are greatly increasing the scope for empiricists. Chapter 3 addressed this discussion by progressively moving from Ordinary Least Squares, Penalized Linear Regressions and Binary Trees to advanced ensemble trees. Results showed that ML algorithms significantly outperform statistical models in terms of predictive accuracy. Specifically, ML models perform 49-100% better than unbiased methods. However, we cannot rely on parameter estimations. For example, Chapter 4 introduced a net prediction problem regarding fraudulent property claims in insurance. Despite the fact that we got extraordinary results in terms of predictive power, the complexity of the problem restricted us from getting behavioral insight. Contrarily, statistical models are easily interpretable: coefficients give us the sign, the magnitude and the statistical significance, and we can learn behavior from marginal impacts and elasticities. Chapter 5 analyzed another prediction problem in the insurance market, particularly how the combination of self-reported data and risk categorization could improve the detection of risky potential customers in insurance markets. Results were also quite impressive in terms of prediction but, again, we did not know anything about the direction or the magnitude of the features. However, by using a Probit model, we showed the benefits of combining statistical models with ML-DL models. The Probit model let us obtain generalizable insights into what type of customers are likely to misreport, enhancing our results. Likewise, Chapter 2 is a clear example of how causal inference can benefit from ML and DL methods. These techniques allowed us to detect that 70 days before each auction there were abnormal behaviors in daily prices. By doing so, we could apply a solid statistical model and estimate precisely what the net effect of the mandated auctions in Spain was. This thesis aims at combining the advantages of both methodologies, machine learning and econometrics, boosting their strengths and attenuating their weaknesses. Thus, we used ML and statistical methods side by side, exploring predictive performance and interpretability. Several conclusions can be inferred from the nature of both approaches. First, as we have observed throughout the chapters, ML and traditional econometric approaches solve fundamentally different problems. We use ML and DL techniques to predict, not in terms of traditional forecasting, but by making our models generalizable to unseen data. On the other hand, traditional econometrics has been focused on causal inference and parameter estimation. Therefore, ML is not replacing traditional techniques, but rather complementing them. Second, ML methods focus on out-of-sample data instead of in-sample data, while statistical models typically focus on goodness of fit. It is then not surprising that ML techniques consistently outperformed traditional techniques in terms of predictive accuracy; the cost is biased estimators. Third, the tradition in economics has been to choose a unique model based on theoretical principles, to fit the full dataset to it and, in consequence, to obtain unbiased estimators and their respective confidence intervals. On the other hand, ML relies on data-driven model selection and does not consider causal inference. Instead of manually choosing the covariates, the functional form is determined by the data. This also translates into the main weakness of ML, which is the lack of inference about the underlying data-generating process; i.e. we cannot derive economically meaningful conclusions from the coefficients. Focusing on out-of-sample performance comes at the expense of the ability to infer causal effects, due to the lack of standard errors on the coefficients. Therefore, predictors are typically biased, and estimators may not be normally distributed. Thus, we can conclude that in terms of out-of-sample performance it is hard to compete against ML models. However, ML cannot contend with the powerful insights that causal inference analysis gives us, which allow us not only to identify the most important variables and their magnitude but also to understand economic behaviors.
28

Pace, Aaron J. "Guided Interactive Machine Learning". Diss., 2006. http://contentdm.lib.byu.edu/ETD/image/etd1355.pdf.

29

Montanez, George D. "Why Machine Learning Works". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1114.

Abstract:
To better understand why machine learning works, we cast learning problems as searches and characterize what makes searches successful. We prove that any search algorithm can only perform well on a narrow subset of problems, and show the effects of dependence on raising the probability of success for searches. We examine two popular ways of understanding what makes machine learning work, empirical risk minimization and compression, and show how they fit within our search framework. Leveraging the "dependence-first" view of learning, we apply this knowledge to the areas of unsupervised time-series segmentation and automated hyperparameter optimization, developing new algorithms with strong empirical performance on real-world problem classes.
30

Thomaz, Andrea Lockerd. "Socially guided machine learning". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36160.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.
Includes bibliographical references (p. 139-146).
Social interaction will be key to enabling robots and machines in general to learn new tasks from ordinary people (not experts in robotics or machine learning). Everyday people who need to teach their machines new things will find it natural to rely on their interpersonal interaction skills. This thesis provides several contributions towards the understanding of this Socially Guided Machine Learning scenario. While the topic of human input to machine learning algorithms has been explored to some extent, prior works have not gone far enough to understand what people will try to communicate when teaching a machine and how algorithms and learning systems can be modified to better accommodate a human partner. Interface techniques have been based on intuition and assumptions rather than grounded in human behavior, and often techniques are not demonstrated or evaluated with everyday people. Using a computer game, Sophie's Kitchen, an experiment with human subjects provides several insights about how people approach the task of teaching a machine. In particular, people want to direct and guide an agent's exploration process, they quickly use the behavior of the agent to infer a mental model of the learning process, and they utilize positive and negative feedback in asymmetric ways.
Using a robotic platform, Leonardo, and 200 people in follow-up studies of modified versions of the Sophie's Kitchen game, four research themes are developed. The use of human guidance in a machine learning exploration can be successfully incorporated to improve learning performance. Novel learning approaches demonstrate aspects of goal-oriented learning. The transparency of the machine learner can have significant effects on the nature of the instruction received from the human teacher, which in turn positively impacts the learning process. Utilizing asymmetric interpretations of positive and negative feedback from a human partner can result in a more efficient and robust learning experience.
by Andrea Lockerd Thomaz.
Ph.D.
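One of the themes above, the asymmetric interpretation of positive and negative human feedback, can be sketched in a few lines. The update rule and scaling factor below are illustrative assumptions of mine, not Thomaz's implementation.

```python
# Hypothetical sketch of asymmetric feedback in tabular Q-learning; the
# scaling factor is my illustrative assumption, not Thomaz's implementation.
n_states, n_actions, alpha = 5, 3, 0.5
Q = [[0.0] * n_actions for _ in range(n_states)]

def human_update(state, action, feedback):
    # Positive feedback mildly reinforces the action just taken; negative
    # feedback is treated as a stronger hint to revise the policy.
    scale = 1.0 if feedback > 0 else 2.0
    Q[state][action] += alpha * scale * feedback

human_update(0, 1, +1.0)   # "good robot"
human_update(0, 2, -1.0)   # "bad robot", larger correction
print(Q[0])                # [0.0, 0.5, -1.0]
```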
31

Leather, Hugh. "Machine learning in compilers". Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/9810.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Tuning a compiler so that it produces optimised code is a difficult task because modern processors are complicated: they have a large number of components operating in parallel, and each is sensitive to the behaviour of the others. Building analytical models on which optimisation heuristics can be based has become harder as processor complexity has increased, and this trend is bound to continue as the world moves towards further heterogeneous parallelism. Compiler writers need to spend months getting a heuristic right for any particular architecture, and these days compilers often support a wide range of disparate devices. Whenever a new processor comes out, even if derived from a previous one, the compiler's heuristics will need to be retuned for it. This is, typically, too much effort, and so, in fact, most compilers are out of date. Machine learning has been shown to help: by running example programs, compiled in different ways, and observing how those ways affect program run-time, automatic machine learning tools can predict good settings with which to compile new, as yet unseen, programs. The field is nascent, but has demonstrated significant results already and promises a day when compilers will be tuned for new hardware without the need for months of compiler experts' time. Many hurdles still remain, however, and while experts no longer have to worry about the details of heuristic parameters, they must spend their time on the details of the machine learning process instead to get the full benefits of the approach. This thesis aims to remove some of the aspects of machine-learning-based compilers for which human experts are still required, paving the way for a completely automatic, retuning compiler. First, we tackle the most conspicuous area of human involvement: feature generation. In all previous machine learning work for compilers, the features, which describe the important aspects of each example to the machine learning tools, must be constructed by an expert. Should that expert choose features poorly, they will miss crucial information without which the machine learning algorithm can never excel. We show not only that we can automatically derive good features, but that these features outperform those of human experts. We demonstrate our approach on loop unrolling, and find we do better than previous work, obtaining XXX% of the available performance, more than the XXX% of the previous state of the art. Next, we demonstrate a new method to efficiently capture the raw data needed for machine learning tasks. The iterative compilation on which machine learning in compilers depends is typically time consuming, often requiring months of compute time. The underlying processes are also noisy, so that most prior works fall into two categories: those which attempt to gather clean data by executing a large number of times, and those which ignore the statistical validity of their data to keep experiment times feasible. Our approach, on the other hand, guarantees clean data while adapting to the experiment at hand, needing an order of magnitude less work than prior techniques.
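As an illustration of the kind of learned heuristic the abstract describes, the sketch below trains a decision tree to predict a loop-unroll factor from loop features. The feature names, values and labels are placeholders invented for illustration, not data from the thesis.

```python
# Sketch under assumptions: learning an unroll-factor heuristic from
# (loop features, best factor) pairs found by iterative compilation.
# Feature names and numbers are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# columns: trip_count, body_size_insns, has_branch, memory_ops
X = np.array([[1000,  4, 0,  1],
              [  10, 40, 1,  8],
              [ 500,  8, 0,  2],
              [   8, 60, 1, 10],
              [2000,  6, 0,  1]])
best_unroll = np.array([8, 1, 4, 1, 8])   # factor that timed fastest

heuristic = DecisionTreeClassifier(random_state=0).fit(X, best_unroll)
print(heuristic.predict([[800, 5, 0, 1]]))   # suggested factor for a new loop
```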
32

Armani, Luca. "Machine Learning: Customer Segmentation". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24925/.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
With the aim of saving capital and increasing profits through ever more targeted marketing activities, knowing a customer's preferences and supporting the customer through a purchase is shifting from a choice to a necessity. Companies are therefore moving towards an increasingly automated approach to classifying their customers, so as to keep optimizing the purchasing experience. Machine Learning makes it possible to carry out various kinds of analysis to achieve this goal. The objective of this project is, first, to give the reader an overview of the techniques and tools that ML provides. The Customer Segmentation problem is then described, together with the techniques and benefits this topic brings with it. Finally, the thesis describes the phases of an ML project aimed at classifying customers on the basis of total monetary spend and the quantity of items purchased.
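A minimal sketch of the kind of segmentation described, assuming two features per customer (total spend and items purchased) and k-means as the clustering algorithm; the data and cluster count are invented for illustration.

```python
# Minimal sketch (invented data): k-means segmentation of customers by
# total spend and number of items purchased.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows: (total_spend, items_purchased) per customer
customers = np.array([[1200, 15], [90, 2], [3000, 40],
                      [150, 3], [2500, 35], [60, 1]])

X = StandardScaler().fit_transform(customers)   # put features on one scale
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(segments)   # e.g. high-value vs low-value customers
```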
33

Du, Buisson Lise. "Machine learning in astronomy". Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/15502.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The search for answers to the deepest questions we have about the Universe has fuelled the collection of ever larger volumes of data about our cosmos. The field of supernova cosmology, for example, is seeing continuous development, with upcoming surveys set to produce a vast amount of data that will require new statistical inference and machine learning techniques for processing and analysis. Distinguishing between real objects and artefacts is one of the first steps in any transient science pipeline and, currently, is still carried out by humans, often leaving hand scanners to sort hundreds or thousands of images per night. This is a time-consuming activity that introduces human biases which are extremely hard to characterise. To succeed in the objectives of future transient surveys, the successful substitution of human hand scanners with machine learning techniques for this artefact-transient classification therefore represents a vital frontier. In this thesis we test various machine learning algorithms and show that many of them can match human hand-scanner performance in classifying transient difference g, r and i-band imaging data from the SDSS-II SN Survey into real objects and artefacts. Using principal component analysis and linear discriminant analysis, we construct a grand total of 56 feature sets with which to train, optimise and test a Minimum Error Classifier (MEC), a naive Bayes classifier, a k-Nearest Neighbours (kNN) algorithm, a Support Vector Machine (SVM) and the SkyNet artificial neural network.
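An illustrative pipeline in the spirit of the feature-set experiments above, combining PCA with one of the tested classifiers (k-NN); the data here is a random stand-in, not the SDSS-II imaging.

```python
# Illustrative pipeline in the spirit of the experiments above: PCA features
# feeding a k-NN classifier. Random stand-in data, not the SDSS-II imaging.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 64))       # e.g. flattened difference-image cutouts
y = rng.integers(0, 2, size=400)     # 1 = real transient, 0 = artefact

clf = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(clf, X, y, cv=5).mean())   # chance level on random labels
```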
34

Punugu, Venkatapavani Pallavi. "Machine Learning in Neuroimaging". Thesis, State University of New York at Buffalo, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10284048.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):

The application of machine learning algorithms to analyze and determine disease-related patterns in neuroimaging has emerged as a topic of great interest in Computer-Aided Diagnosis (CAD). This study is a small step towards categorizing Alzheimer's disease, neurodegenerative diseases, psychiatric diseases and cerebrovascular small-vessel diseases using CAD. In this study, the SPECT neuroimages are pre-processed using powerful data reduction techniques such as Singular Value Decomposition (SVD), Independent Component Analysis (ICA) and Automated Anatomical Labeling (AAL). Each of the pre-processing methods is used with three machine learning algorithms, namely Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and k-Nearest Neighbors (k-NN), to recognize disease patterns and classify the diseases. While the neurodegenerative and psychiatric categories overlap with a mix of conditions and yielded only moderate classification performance, the classification between Alzheimer's disease and cerebrovascular small-vessel diseases gave good results, with an accuracy of up to 73.7%.

35

Lounici, Sofiane. "Watermarking machine learning models". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS282.pdf.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The protection of the intellectual property of machine learning models appears increasingly necessary, given the investments they represent and their impact on society. In this thesis, we study the watermarking of machine learning models. We provide a state of the art on current watermarking techniques, and then complement it by considering watermarking beyond image classification tasks. We then define forging attacks against watermarking for model hosting platforms and present a new fairness-based watermarking technique. In addition, we propose an implementation of the presented techniques.
36

Tiensuu, Jacob, Maja Linderholm, Sofia Dreborg and Fredrik Örn. "Detecting exoplanets with machine learning : A comparative study between convolutional neural networks and support vector machines". Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385690.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
In this project two machine learning methods, Support Vector Machine (SVM) and Convolutional Neural Network (CNN), are studied to determine which performs best on a labeled data set containing time series of light intensity from extrasolar stars. The main difficulty is that the data set contains far more stars without exoplanets than stars with orbiting exoplanets, giving a so-called imbalanced data set; this is mitigated here by, for example, mirroring the light curves of stars with an orbiting exoplanet and adding them to the set. To improve the results further, some preprocessing is done before applying the methods to the data set. For the SVM, feature extraction and a Fourier transform of the time series are important steps, and further preprocessing alternatives are investigated. For the CNN, the time series are both detrended and smoothed, giving two inputs for the same light curve. All code is implemented in Python. Of all the validation parameters, recall is considered the main priority, since finding all exoplanets matters more than correctly rejecting all non-exoplanet stars. CNN turned out to be the best performing method for the chosen configurations, with a recall of 1.000, exceeding the SVM's 0.800. On the second validation parameter, precision, CNN also performs best, with 0.769 over the SVM's 0.571.
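Two of the ingredients named above, mirroring positive light curves to rebalance the set and scoring by recall, can be sketched as follows (toy arrays, not the project's data).

```python
# Toy arrays, not the project's data: rebalancing by mirroring positive
# light curves, and recall as the headline metric.
import numpy as np
from sklearn.metrics import recall_score

curves = np.random.default_rng(0).normal(size=(10, 100))  # toy light curves
labels = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])         # 1 = has exoplanet

positives = curves[labels == 1]
aug_curves = np.vstack([curves, positives[:, ::-1]])      # add mirrored copies
aug_labels = np.concatenate([labels, np.ones(len(positives), dtype=int)])
print(aug_curves.shape, int(aug_labels.sum()))

y_true, y_pred = [1, 1, 0, 0, 1], [1, 0, 0, 0, 1]
print("recall:", round(recall_score(y_true, y_pred), 3))  # 2 of 3 planets found
```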
37

Delnevo, Giovanni <1991>. "On the implications of big data and machine learning in the interplay between humans and machines". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10036/1/phd_thesis_delnevo_giovanni.pdf.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Big data and machine learning are profoundly shaping the social, economic, and political spheres and are becoming part of the collective imagination. In recent years, barriers have fallen and a wide range of products, services, and resources that exploit Artificial Intelligence have emerged. It thus becomes fundamentally important to understand the limits and, consequently, the potential of predictions made by a machine that learns directly from data. Understanding the limits of machine predictions would dispel false beliefs about the capabilities of machine learning algorithms while avoiding possible misuses. To tackle this problem, quite different research lines are emerging, each focusing on different aspects. In this thesis, we study how the presence of big data and artificial intelligence influences the interaction between humans and computers. Such a study should produce high-level reflections that contribute to framing how the interaction between humans and computers has changed, given the presence of big data and algorithms that can make computers somehow intelligent, albeit with limitations. The chapters of the thesis describe various case studies faced during the Ph.D., chosen specifically for their peculiar characteristics. Starting from the obtained results, we provide several high-level reflections on the implications of the interaction between humans and machines.
38

Mwamsojo, Nickson. "Neuromorphic photonic systems for information processing". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS002.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Artificial Intelligence has revolutionized the scientific community thanks to the advent of a robust computational workforce and Artificial Neural Networks. However, current implementation trends introduce a rapidly growing demand for computational power that surpasses the rates and limitations of Moore's and Koomey's laws, implying an eventual efficiency barrier. To respond to these demands, bio-inspired techniques implemented in physical devices, known as 'neuromorphic' systems, have been proposed. Among these, this work focuses on 'Reservoir Computing' and 'Coherent Ising Machines'. Reservoir Computing, for instance, has demonstrated its computational power, with photonic hardware reaching state-of-the-art speech-recognition performance at up to 1 million words per second in 2017. We first propose an automatic hyperparameter tuning technique for Reservoir Computing, together with a theoretical study of its convergence. We then propose an optoelectronic Reservoir Computing solution to the problem of early-stage Alzheimer's disease detection; in addition to classification rates better than the state of the art, a complete study of the energy-cost versus performance trade-off demonstrates the validity of the approach. Finally, we address maximum-a-posteriori noisy-image restoration using a suitable optoelectronic implementation of a Coherent Ising Machine.
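Hardware aside, the Reservoir Computing principle itself, a fixed random recurrent network with only a trained linear readout, can be sketched in software. The toy echo-state network below is my own illustration, unrelated to the thesis's photonic implementation; it learns to recall the previous input.

```python
# Minimal software echo-state network (my own toy code, unrelated to the
# photonic hardware above): fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500
W_in = rng.normal(scale=0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

u = rng.uniform(-1, 1, size=T)     # input signal
target = np.roll(u, 1)             # task: recall the previous input

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])   # the reservoir itself stays fixed
    states[t] = x

W_out = np.linalg.lstsq(states[50:], target[50:], rcond=None)[0]  # readout
pred = states[50:] @ W_out
print("readout MSE:", np.mean((pred - target[50:]) ** 2))
```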
39

Johansson, Richard. "Machine learning på tidsseriedataset : En utvärdering av modeller i Azure Machine Learning Studio". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-71223.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
In line with technological advancements in processing power and storage capabilities through cloud services, higher demands are placed on companies' data sets. Business executives now expect analyses of real-time data or massive data sets, where traditional Business Intelligence struggles to deliver. The interest in using machine learning to predict trends and patterns that the human eye can't see is thus higher than ever. Time series data sets are characterised by a time stamp and a value; a sensor data set is one example. The company with which I've been in touch collects data from sensors in a control room. In order to predict patterns and, in the future, use these predictions in combination with other data, the company wants to apply machine learning to its data set. To do this effectively, the right machine learning model needs to be selected. This thesis therefore has the purpose of finding out which machine learning model, or models, from the selected platform, Azure Machine Learning Studio, works best on a time series data set with sensor data. The models are then tested through a machine learning pilot on the company's data. Throughout the thesis, multiple machine learning models from the selected platform are evaluated. For the data set at hand, the conclusion is that a supervised regression model of the Decision Forest Regression type gives the best results and has the best chance of adapting to a data set growing in size. Another conclusion is that more training data is needed to give the model an even better result, especially since it takes date and weekday into account. Adjusting the parameters of each model might also affect the result, opening up further improvements.
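A rough software sketch of the winning model family, a decision-forest regressor applied to a time series via lagged features; this uses scikit-learn on synthetic data rather than Azure Machine Learning Studio and the company's sensor data.

```python
# Rough sketch with synthetic data (scikit-learn, not Azure ML Studio):
# a decision-forest regressor on lagged values of a sensor-like series.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.normal(size=600)

lags = 5
X = np.array([series[i - lags:i] for i in range(lags, len(series))])
y = series[lags:]

split = 500   # train on the past, test on the most recent points
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
print("test MSE:", np.mean((model.predict(X[split:]) - y[split:]) ** 2))
```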
40

Wu, Anjian M. B. A. Sloan School of Management. "Performance modeling of human-machine interfaces using machine learning". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122599.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2019, In conjunction with the Leaders for Global Operations Program at MIT
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019, In conjunction with the Leaders for Global Operations Program at MIT
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 70-71).
As the popularity of online retail expands, world-class electronic commerce (e-commerce) businesses are increasingly adopting collaborative robotics and Internet of Things (IoT) technologies to enhance fulfillment efficiency and operational advantage. E-commerce giants like Alibaba and Amazon are known to have smart warehouses staffed by both machines and human operators. The robotics systems specialize in transporting and maneuvering heavy shelves of goods to and from operators, leaving operators the higher-level cognitive tasks needed to process goods, such as identification and complex manipulation of individual objects. Achieving high system throughput in these systems requires harmonized interaction between humans and machines: the robotics systems must minimize the time operators wait for new work (idle time), and operators need to minimize the time spent processing items (takt time). Over time, these systems naturally generate extensive amounts of data. Our research provides insights both into using this data to design a machine-learning (ML) model of takt time and into methods of interpreting the insights from such a model. We start by presenting our iterative approach to developing an ML model that predicts the average takt of a group of operators at hourly intervals. Our final XGBoost model reached an out-of-sample performance of 4.01% mean absolute percent error (MAPE) using over 250,000 hours of historic data across multiple warehouses around the world. Our research shares methods to cross-examine and interpret the relationships learned by the model for business value, allowing organizations to effectively quantify system trade-offs and identify root causes of takt performance deviations. Finally, we discuss the implications of our empirical findings.
by Anjian Wu.
M.B.A.
S.M.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
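The reported metric, mean absolute percent error, and the model family are easy to sketch; the snippet below uses scikit-learn's gradient boosting as a stand-in for XGBoost, on invented features.

```python
# Sketch of the reported metric and model family; scikit-learn's gradient
# boosting stands in for XGBoost, and the features are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                  # e.g. shift and item-mix features
y = 30 + 3.0 * X[:, 0] + rng.normal(size=1000)  # hypothetical takt in seconds

model = GradientBoostingRegressor(random_state=0).fit(X[:800], y[:800])
print("MAPE: %.2f%%" % mape(y[800:], model.predict(X[800:])))
```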
41

Pilkington, Nicholas Charles Victor. "Hyperparameter optimisation for multiple kernels". Thesis, University of Cambridge, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648763.

Full text
APA, Harvard, Vancouver, ISO and other styles
42

Merat, Sepehr. "Clustering Via Supervised Support Vector Machines". ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/857.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
An SVM-based clustering algorithm is introduced that clusters data with no a priori knowledge of input classes. The algorithm initializes by first running a binary SVM classifier against a data set with each vector in the set randomly labeled. Once this initialization step is complete, the SVM confidence parameters for classification on each of the training instances can be accessed. The lowest confidence data (e.g., the worst of the mislabeled data) then has its labels switched to the other class label. The SVM is then re-run on the data set (with partly re-labeled data). The repetition of the above process improves the separability until there is no misclassification. Variations on this type of clustering approach are shown.
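The loop described in the abstract can be sketched directly; the code below is my reading of it (linear-kernel SVM, one lowest-confidence label flipped per round), not the author's implementation.

```python
# My reading of the loop above, not the author's code: random labels, fit a
# linear SVM, flip the lowest-confidence misclassified label, repeat.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, size=(30, 2)),
               rng.normal(+2, 0.5, size=(30, 2))])
y = rng.integers(0, 2, size=60)              # random initial labels

for _ in range(200):
    if len(np.unique(y)) < 2:
        break                                # degenerate labelling, stop
    svm = SVC(kernel="linear").fit(X, y)
    signed = 2 * y - 1                       # labels as -1/+1
    conf = svm.decision_function(X) * signed # negative = misclassified
    wrong = np.flatnonzero(conf < 0)
    if wrong.size == 0:
        break                                # no misclassification: done
    worst = wrong[np.argmin(conf[wrong])]    # lowest-confidence point
    y[worst] = 1 - y[worst]                  # switch its class label

print("cluster sizes:", np.bincount(y))
```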
43

Ohlsson, Caroline. "Exploring the potential of machine learning : How machine learning can support financial risk management". Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-324684.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
For decades, computer software has been developed to support human decision making. Along with the increased complexity of business environments, smart technologies are becoming popular and useful for decision support based on huge amounts of information and advanced analysis. The aim of this study was to explore the potential of using machine learning for financial risk management in debt collection, with the purpose of providing a clear description of the possibilities and difficulties involved. The exploration was done from a business perspective in order to complement previous research using a computer science approach, which centres on the development and testing of algorithms. By conducting a case study at Tieto, which provides a market-leading debt collection system, data was collected about the process, and the findings were analyzed based on machine learning theories. The results showed that machine learning has the potential to improve predictions for risk assessment through advanced pattern recognition and by adapting to changes in the environment. Furthermore, it also has the potential to provide the decision maker with customized suggestions for suitable risk mitigation strategies based on experiences and evaluations of previous strategic decisions. However, issues related to data availability were identified as potential difficulties, owing to the limitations on accessing more data from authorities through an automated process. Moreover, the potential is highly dependent on future laws and regulations for data management, which will further affect the difficulty of data availability.
44

Ghule, S. "Computational development of the strategies to explore molecular machines and the molecular space for desired properties using machine learning". Thesis (Ph.D.), CSIR-National Chemical Laboratory, Pune, 2022. http://dspace.ncl.res.in:8080/xmlui/handle/20.500.12252/6165.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
For thousands of years, scientific discoveries have played a vital role in the progress of human civilization. The discovery of new materials or new scientific phenomena, or an improved understanding of known phenomena, requires exploration of the space available for a given class of molecules (the molecular space). The typical size of a molecular space is estimated at ~10^60, which is larger than the number of stars in the observable universe (~10^24). Conventional experimental, computational, and algorithmic approaches are inefficient at exploring this vast molecular space. Furthermore, conventional exploration strategies do not take advantage of the large databases available today. Machine learning (ML) algorithms, on the other hand, can extract hidden knowledge from large datasets and have shown excellent predictive accuracies in many fields, surpassing traditional methods. ML algorithms are thus promising candidates for developing efficient exploration strategies for the vast molecular space. In this thesis work, we demonstrate the development of exploration strategies using machine learning algorithms for three different molecular spaces. The first molecular space investigated includes battery materials based on phenazine molecules. We developed an accurate hybrid DFT-ML approach to explore this molecular space, showed that 2D molecular features are the most informative in predicting the redox potential of phenazine derivatives in DME, and showed that it is possible to develop reasonably accurate machine learning models for complex quantities such as redox potential using small and simple datasets. Next, we investigated different unsupervised machine learning algorithms to explore the molecular space of DNA and proteins and uncover the interactions between them. We showed that unsupervised machine learning models can discover commonly occurring regulatory modules containing interacting and co-binding transcription factors without prior information on binding activities. Sometimes, in fundamental research, one encounters a desired property that cannot easily be computed using existing methodologies; we faced this issue while investigating molecular machines. We therefore developed an algorithm for quantifying the desired property (i.e., rotational motion) of the ring in molecular machines, investigating linear regression, a machine learning algorithm, along the way. The developed algorithm gave us insight into the different factors responsible for the rotational directionality of the ring in the rotaxane system. This thesis work thus demonstrates the applicability of machine learning and computational tools to the development of efficient exploration strategies for molecular space, and shows how to address the issues one may encounter during such development. Furthermore, the specific strategies developed for the three molecular spaces are valuable for discovering new molecules and new scientific phenomena. For example, the hybrid DFT-ML approach can help discover promising phenazine derivatives for green energy storage systems such as redox flow batteries (RFBs); the unsupervised machine learning approach has the potential to identify genetic determinants of diseases; and the algorithm developed for quantifying rotation can help experimentalists develop novel molecular machines with rotational directionality.
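As a sketch of the hybrid DFT-ML idea, the snippet below fits a regressor mapping simple 2D descriptors to a redox potential; the descriptor names, values and targets are invented placeholders, not the thesis's dataset.

```python
# Hedged sketch of the hybrid DFT-ML idea; descriptors, values and redox
# targets are invented placeholders, not the thesis's dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# columns: n_rings, n_N_atoms, n_substituents, polar_surface_area
X = np.array([[3, 2, 0, 25.8],
              [3, 2, 2, 45.1],
              [3, 2, 4, 70.3],
              [3, 2, 1, 38.0]])
redox_V = np.array([-1.20, -0.95, -0.60, -1.05])  # stand-in DFT values

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, redox_V)
print(model.predict([[3, 2, 3, 55.0]]))   # screen a new phenazine derivative
```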
45

Grangier, David. "Machine learning for information retrieval". Lausanne : École polytechnique fédérale de Lausanne, 2008. http://aleph.unisg.ch/volltext/464553_Grangier_Machine_learning_for_information_retrieval.pdf.

Full text
APA, Harvard, Vancouver, ISO and other styles
46

Baglioni, Cecilia. "Processi Gaussiani e Machine Learning". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20704/.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
This thesis explores the theory of Gaussian Processes, using regression models to arrive at the posterior predictive distribution and studying its differentiability properties in view of the Greeks; in this sense, Machine Learning is introduced through the technique of Supervised Learning. The first chapter lists the assumptions underlying the Black-Scholes model and the derivation of the formula, arriving at the definition of the Greeks, which measure the sensitivity of a portfolio to changes in the factors on which the Black-Scholes formula depends. The second chapter starts from a linear regression model and derives the posterior predictive distribution. To overcome the limitations of the Bayesian linear model, a new regression model is introduced that remains linear in the parameters and is applied to a new space, called the feature space; this yields a new expression for the predictive distribution with a lower computational cost. This model is an example of a Gaussian Process with a given mean and covariance. A link then emerges between the Greeks and Gaussian Processes: Delta and Gamma are in fact computed from the first and second derivatives of the process in question. In the third chapter, the theoretical results are applied to data generated with the Black-Scholes model, using the R package DiceKriging and its functions km, predict and simulate. The training set consists of put prices, Delta values and underlying prices computed with the Black-Scholes formula; the validation set contains 300 observations. km estimates the covariance and mean parameters; predict and simulate return the predicted means and standard deviations and the outputs for the test set. Finally, the methods are compared in terms of error, computational cost and memory.
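The thesis works in R with DiceKriging; as a language-neutral illustration of the same posterior predictive computation, here is a minimal Gaussian-process regression sketch in Python with stand-in put prices.

```python
# Minimal Gaussian-process regression sketch (Python/scikit-learn here,
# rather than the thesis's R package DiceKriging): posterior predictive
# mean and standard deviation for stand-in put prices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

S = np.linspace(60, 140, 20).reshape(-1, 1)   # underlying prices (training)
put = np.maximum(100 - S.ravel(), 0) + 2.0    # stand-in for Black-Scholes puts

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0),
                              normalize_y=True).fit(S, put)
mean, std = gp.predict(np.array([[95.0], [105.0]]), return_std=True)
print(mean, std)   # predictive distribution at two new prices
```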
47

Liao, Yihua. "Machine learning in intrusion detection". For electronic version search Digital Dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2005. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO and other styles
48

Strobl, Carolin. "Statistical Issues in Machine Learning". Diss., lmu, 2008. http://nbn-resolving.de/urn:nbn:de:bvb:19-89043.

Full text
APA, Harvard, Vancouver, ISO and other styles
49

Stendahl, Jonas and Johan Arnör. "Gesture Keyboard USING MACHINE LEARNING". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157141.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The market for mobile devices is expanding rapidly. Input of text is a large part of using a mobile device, so an input method that is convenient and fast is of great interest. Gesture keyboards allow the user to input text by dragging a finger over the letters in the desired word. This study investigates whether gesture keyboards can be enhanced using machine learning. A gesture keyboard based on a Multilayer Perceptron with backpropagation was developed and evaluated. The results indicate that the evaluated implementation is not an optimal solution to the problem of recognizing swiped words.
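A toy version of the evaluated approach, an MLP trained with backpropagation on fixed-length swipe traces; the trace encoding and the data are assumptions of mine, not the thesis's implementation.

```python
# Toy version of the evaluated approach; the fixed-length trace encoding
# and the data are my assumptions, not the thesis's implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 40))   # 20 resampled (x, y) swipe points
words = rng.integers(0, 5, size=200)    # 5 candidate words

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X[:150], words[:150])
print("held-out accuracy:", mlp.score(X[150:], words[150:]))  # chance on noise
```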
50

Bergkvist, Markus and Tobias Olandersson. "Machine learning in simulated RoboCup". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3827.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
An implementation of the Electric Field Approach applied to simulated RoboCup is presented, together with a demonstration of a learning system. Results are presented from the optimization of the Electric Field parameters in a limited situation, using the learning system. Learning techniques used in contemporary RoboCup research are also described, including a brief presentation of their results.

Go to the bibliography