Dissertations / Theses on the topic 'Systems for Machine Learning'

Consult the top 50 dissertations / theses for your research on the topic 'Systems for Machine Learning.'

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Shukla, Ritesh. "Machine learning ecosystem : implications for business strategy centered on machine learning." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/107342.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, System Design and Management Program, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 48-50).
As interest in adopting machine learning as a core component of a business strategy increases, business owners face the challenge of integrating an uncertain and rapidly evolving technology into their organization, and of depending on it for the success of their strategy. The field of machine learning has a rich literature on modeling the technical systems that implement it. This thesis attempts to connect the literature on business and technology, and on the evolution and adoption of technology, to the emergent properties of machine learning systems. It provides high-level levers and frameworks to better prepare business owners to adopt machine learning in pursuit of their strategic goals.
by Ritesh Shukla.
S.M. in Engineering and Management
2

Andersson, Viktor. "Machine Learning in Logistics: Machine Learning Algorithms : Data Preprocessing and Machine Learning Algorithms." Thesis, Luleå tekniska universitet, Datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64721.

Abstract:
Data Ductus is a Swedish IT consulting company whose customer base ranges from small startups to large, established corporations. The company has grown steadily since the 80s and has established offices in both Sweden and the US. With the help of machine learning, this project presents a possible solution to the errors caused by the human factor in the logistics business. A way of preprocessing data before applying it to a machine learning algorithm, as well as a couple of algorithms to use, is presented.
3

Swere, Erick A. R. "Machine learning in embedded systems." Thesis, Loughborough University, 2008. https://dspace.lboro.ac.uk/2134/4969.

Abstract:
This thesis describes novel machine learning techniques specifically designed for use in real-time embedded systems. The techniques directly address three major requirements of such learning systems. Firstly, learning must be capable of being achieved incrementally, since many applications do not have a representative training set available at the outset. Secondly, to guarantee real-time performance, the techniques must be able to operate within a deterministic and limited time bound. Thirdly, the memory requirement must be limited and known a priori, to ensure that the limited memory available to hold data in embedded systems will not be exceeded. The work described here makes three principal contributions. The frequency table is a data structure specifically designed to reduce the memory requirements of incremental learning in embedded systems. The frequency table facilitates a compact representation of received data that is sufficient for decision tree generation. The frequency table decision tree (FTDT) learning method provides classification performance similar to existing decision tree approaches, but extends these to incremental learning while substantially reducing memory usage for practical problems. The incremental decision path (IDP) method is able to efficiently induce, from the frequency table of observations, the path through a decision tree that is necessary for the classification of a single instance. The classification performance of IDP is equivalent to that of existing decision tree algorithms, but since IDP allows the maximum number of partial decision tree nodes to be determined prior to the generation of the path, both the memory requirement and the execution time are deterministic. In this work, the viability of the techniques is demonstrated through application to real-time mobile robot navigation.
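The frequency table idea lends itself to a compact illustration: instead of storing every observation, one keeps per-class counts for each attribute-value pair, which is sufficient to compute the split statistics of a decision tree while keeping memory bounded and known a priori. A minimal sketch (the attribute values and class names below are invented, not taken from the thesis):

```python
from collections import defaultdict
import math

class FrequencyTable:
    """Sufficient statistics for incremental decision-tree learning.

    Memory grows with (#attributes x #values x #classes), not with the
    number of observations, so the bound is known before deployment.
    """
    def __init__(self, n_attributes):
        # counts[attr][(value, cls)] -> observations seen with that pair
        self.counts = [defaultdict(int) for _ in range(n_attributes)]
        self.class_counts = defaultdict(int)
        self.n = 0

    def update(self, x, cls):
        """Absorb one observation incrementally; O(#attributes) time."""
        for attr, value in enumerate(x):
            self.counts[attr][(value, cls)] += 1
        self.class_counts[cls] += 1
        self.n += 1

    def class_entropy(self):
        """Entropy of the class distribution, the basis for split selection."""
        return -sum((c / self.n) * math.log2(c / self.n)
                    for c in self.class_counts.values())

ft = FrequencyTable(n_attributes=2)
ft.update(["near", "slow"], cls="turn")
ft.update(["far", "fast"], cls="straight")
print(ft.class_entropy())  # 1.0 bit for two equally likely classes
```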
4

Verleyen, Wim. "Machine learning for systems pathology." Thesis, University of St Andrews, 2013. http://hdl.handle.net/10023/4512.

Abstract:
Systems pathology attempts to introduce more holistic approaches towards pathology and to integrate clinicopathological information with “-omics” technology. This doctorate researches two examples of a systems approach for pathology: (1) personalized patient outcome prediction for ovarian cancer and (2) an analytical approach that differentiates between individual and collective tumour invasion. In the personalized patient outcome prediction study for ovarian cancer, clinicopathological measurements and proteomic biomarkers are analysed with a set of newly engineered bioinformatic tools. These tools are based upon feature selection, survival analysis with Cox proportional hazards regression, and a novel Monte Carlo approach. Clinical and pathological data prove to have highly significant information content, as expected; however, molecular data has little information content alone, and is only significant when selected most-informative variables are placed in the context of the patient's clinical and pathological measures. Furthermore, classifiers based on support vector machines (SVMs) that predict one-year PFS and three-year OS with high accuracy show how the addition of carefully selected molecular measures to clinical and pathological knowledge can enable personalized prognosis predictions. Finally, the high performance of these classifiers is validated on an additional data set. The second study, the analytical approach that differentiates between individual and collective tumour invasion, analyses a set of morphological measures. These morphological measurements are collected with a newly developed process using automated imaging analysis for data collection in combination with a Bayesian network analysis to probabilistically connect morphological variables with tumour invasion modes. Between an individual and a collective invasion mode, cell-cell contact is the most discriminating morphological feature. Smaller invading groups were typified by smoother cellular surfaces than those invading collectively in larger groups. Interestingly, elongation was evident in all invading cell groups and was not a specific feature of single-cell invasion as a surrogate of epithelial-mesenchymal transition. In conclusion, the combination of automated imaging analysis and Bayesian network analysis provides insight into the morphological variables associated with the transition of cancer cells between invasion modes. We show that only two morphologically distinct modes of invasion exist. The two studies performed in this thesis illustrate the potential of a systems approach for pathology and the need for quantitative approaches in order to reveal the system behind pathology.
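The survival-analysis step named here, Cox proportional hazards regression, can be sketched with the lifelines library on synthetic data; the covariates, follow-up window and effect sizes below are invented for illustration, not taken from the study:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "stage": rng.integers(1, 5, n),
    "marker": rng.normal(1.0, 0.5, n),   # a stand-in proteomic biomarker
})
# Hypothetical progression-free survival whose hazard rises with stage/marker.
hazard = 0.05 * np.exp(0.4 * df["stage"] + 0.5 * df["marker"])
df["pfs_months"] = rng.exponential(1.0 / hazard)
df["progressed"] = (df["pfs_months"] < 36).astype(int)
df.loc[df["progressed"] == 0, "pfs_months"] = 36.0  # censor at 36 months

cph = CoxPHFitter().fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # hazard ratios show each covariate's contribution
```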
5

Roderus, Jens, Simon Larson, and Eric Pihl. "Hadoop scalability evaluation for machine learning algorithms on physical machines : Parallel machine learning on computing clusters." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20102.

Abstract:
The amount of available data has allowed the field of machine learning to flourish. But with growing data set sizes comes an increase in algorithm execution times. Cluster computing frameworks provide tools for distributing data and processing power over several computer nodes, allowing algorithms to run in feasible time frames when data sets are large. Different cluster computing frameworks come with different trade-offs. In this thesis, the scalability of the execution time of machine learning algorithms running on the Hadoop cluster computing framework is investigated. A recent version of Hadoop and algorithms relevant in industrial machine learning, namely K-means, latent Dirichlet allocation and naive Bayes, are used in the experiments. This thesis provides valuable information to anyone choosing between different cluster computing frameworks. The results show everything from moderate scalability to no scalability at all, and indicate that Hadoop as a framework may have serious restrictions on how well tasks are actually parallelized. Possible scalability improvements could be achieved by modifying the machine learning library algorithms or by Hadoop parameter tuning.
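Scalability results like these are typically summarised as speedup and parallel efficiency; a sketch of that bookkeeping (the timings are made-up placeholders, not the thesis's measurements):

```python
# Hypothetical wall-clock times (seconds) for one algorithm on 1-8 nodes.
times = {1: 960.0, 2: 540.0, 4: 330.0, 8: 250.0}

base = times[1]
for nodes in sorted(times):
    speedup = base / times[nodes]
    efficiency = speedup / nodes   # 1.0 would be ideal linear scaling
    print(f"{nodes} node(s): speedup {speedup:.2f}, efficiency {efficiency:.2f}")
```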
6

Johansson, Richard. "Machine learning på tidsseriedataset : En utvärdering av modeller i Azure Machine Learning Studio." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-71223.

Abstract:
In line with technology advancements in processing power and storage capabilities through cloud services, higher demands are placed on companies’ data sets. Business executives now expect analyses of real-time data or massive data sets, where traditional Business Intelligence struggles to deliver. The interest in using machine learning to predict trends and patterns that the human eye can’t see is thus higher than ever. Time series data sets are characterised by a time stamp and a value; for example, a sensor data set. The company with which I’ve been in touch collects data from sensors in a control room. In order to predict patterns and, in the future, use these in combination with other data, the company wants to apply machine learning to its data set. To do this effectively, the right machine learning model needs to be selected. This thesis therefore has the purpose of finding out which machine learning model, or models, from the selected platform – Azure Machine Learning Studio – works best on a time series data set with sensor data. The models are then tested through a machine learning pilot on the company’s data. Throughout the thesis, multiple machine learning models from the selected platform are evaluated. For the data set at hand, the conclusion is that a supervised regression model of the Decision Forest Regression type gives the best results and has the best chance of adapting to a data set growing in size. Another conclusion is that more training data is needed to give the model an even better result, especially since it takes date and weekday into account. Adjustment of the parameters for each model might also affect the result, opening up further improvements.
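Outside Azure Machine Learning Studio, the winning model type can be approximated with scikit-learn's random forest regressor, deriving date and weekday features from the time stamp as the abstract describes; the sensor signal below is synthetic and purely illustrative:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical hourly sensor readings over 60 days.
idx = pd.date_range("2018-01-01", periods=24 * 60, freq="H")
rng = np.random.default_rng(0)
df = pd.DataFrame({"timestamp": idx})
df["weekday"] = df["timestamp"].dt.dayofweek
df["hour"] = df["timestamp"].dt.hour
df["value"] = (10 + 3 * np.sin(df["hour"] / 24 * 2 * np.pi)  # daily cycle
               + 2 * (df["weekday"] >= 5)                    # weekend shift
               + rng.normal(0, 0.5, len(df)))                # noise

X, y = df[["weekday", "hour"]], df["value"]
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X.iloc[:-168], y.iloc[:-168])            # hold out the final week
print(model.score(X.iloc[-168:], y.iloc[-168:]))   # R^2 on the held-out week
```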
7

Schneider, C. "Using unsupervised machine learning for fault identification in virtual machines." Thesis, University of St Andrews, 2015. http://hdl.handle.net/10023/7327.

Abstract:
Self-healing systems promise operating cost reductions in large-scale computing environments through the automated detection of, and recovery from, faults. However, at present there appears to be little empirical evidence comparing the different approaches, or demonstrating that such implementations reduce costs. This thesis compares previous and current self-healing approaches before demonstrating a new, unsupervised approach that combines artificial neural networks with performance tests to perform fault identification in an automated fashion, i.e. the correct and accurate determination of which computer features are associated with a given performance test failure. Several key contributions are made in the course of this research, including an analysis of the different types of self-healing approaches based on their contextual use, a baseline for future comparisons between self-healing frameworks that use artificial neural networks, and a successful, automated fault identification in cloud infrastructure, more specifically in virtual machines. The approach uses three established machine learning techniques: Naïve Bayes, Baum-Welch, and Contrastive Divergence Learning. The latter demonstrates minimisation of human interaction beyond previous implementations by producing a list, in decreasing order of likelihood, of potential root causes (i.e. fault hypotheses), which brings the state of the art one step closer toward fully self-healing systems. This thesis also examines the impact that different types of faults have on their respective identification. This helps in understanding the validity of the data being presented and how the field is progressing, whilst examining the differences in identification between emulated thread crashes and errant user changes – a contribution believed to be unique to this research. Lastly, future research avenues and conclusions in automated fault identification are described along with lessons learned throughout this endeavour. These include the progression of artificial neural networks, how learning algorithms are being developed and understood, and possibilities for automatically generating feature locality data.
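The ranked list of fault hypotheses can be illustrated with the first of the three techniques named, Naïve Bayes: score a failure's feature vector against each candidate fault class and sort by probability. A sketch in which the features and fault labels are invented:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data: VM performance counters recorded while a
# known fault was present (or absent).
rng = np.random.default_rng(0)
X_train = rng.random((200, 5))   # cpu, mem, io, net, latency
y_train = rng.choice(["thread_crash", "errant_user_change", "none"], 200)

clf = GaussianNB().fit(X_train, y_train)

x_failed = rng.random((1, 5))    # counters at performance-test failure time
probs = clf.predict_proba(x_failed)[0]
for cls, p in sorted(zip(clf.classes_, probs), key=lambda t: -t[1]):
    print(f"{cls}: {p:.3f}")     # fault hypotheses, most likely first
```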
8

Michailidis, Marios. "Investigating machine learning methods in recommender systems." Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/10031000/.

Abstract:
This thesis investigates the use of machine learning in improving predictions of the top K product purchases at a particular retailer. The data used for this research is a freely available (for research) sample of the retailer’s transactional data, spanning a period of 102 weeks and consisting of several million observations. The thesis consists of four key experiments:
1. Univariate analysis of the dataset: The first experiment, the univariate analysis of the dataset, sets the background for the following chapters. It provides explanatory insight into the customers’ shopping behaviour and identifies the drivers that connect customers and products. Using various behavioural, descriptive and aggregated features, the training dataset for a group of customers is created to map their future purchasing actions for one specific week. The test dataset is then constructed to predict the purchasing actions for the forthcoming week. This constitutes a univariate analysis, and the chapter is an introduction to the features included in the subsequent algorithmic processes.
2. Meta-modelling to predict top K products: The second experiment investigates the improvement in predicting the top K products in terms of precision at K (precision@K) and Area Under Curve (AUC) through meta-modelling. It compares combining a range of common supervised machine learning algorithms within a meta-modelling framework (where each generated model is an input to a secondary model) with any single model involved, field benchmark or simple model-combination method.
3. Hybrid method to predict repeated, promotion-driven product purchases in an irregular testing environment: The third experiment demonstrates a hybrid methodology of cross validation, modelling and optimization for improving the accuracy of predicting the products that a retailer's customers will buy after having bought them at least once with a promotional coupon. This methodology is applied in the context of a train and test environment with limited overlap: the test data includes different coupons, different customers and different time periods. Additionally, this chapter provides a real-life application and a stress-test of the feature engineering findings from experiment 1, and borrows ideas from ensemble (or meta) modelling as detailed in experiment 2.
4. The StackNet model: The fourth experiment proposes a framework in the form of a scalable version of [Wolpert 1992] stacked generalization, extended through cross validation methods to many levels, resembling in structure a fully connected feedforward neural network where the hidden nodes represent complex functions in the form of machine learning models of any nature. The implementation of the model is made available in the Java programming language.
The research contribution of this thesis is to improve the recommendation science used in the grocery and Fast Moving Consumer Goods (FMCG) markets. It seeks to identify methods of increasing the accuracy of predicting what customers are going to buy in the future by leveraging up-to-date innovations in machine learning, as well as improving current processes in the areas of feature engineering, data pre-processing and ensemble modelling. For the general scientific community, this thesis can be exploited to better understand the type of data available in the grocery market and to gain insights into how to structure similar machine learning and analytical projects.
The extensive computational and algorithmic framework that accompanies this thesis is also available for general use as a prototype to solve similar data challenges. References: Wolpert, D. H. (1992). Stacked generalization. Neural Networks, 5(2), 241-259. Yang, X., Steck, H., Guo, Y., & Liu, Y. (2012). On top-k recommendation using social networks. In Proceedings of the Sixth ACM Conference on Recommender Systems (pp. 67-74). ACM.
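The mechanism at the heart of experiment 4, Wolpert-style stacked generalization with cross validation, can be sketched with out-of-fold predictions feeding a second-level learner. This is a generic sketch, not StackNet's Java API:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, random_state=0)

# Level 0: out-of-fold probabilities, so the level-1 model never sees a
# prediction made on the fold a level-0 model was trained on (no leakage).
level0 = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(random_state=0)]
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in level0
])

# Level 1: a simple learner combines the level-0 outputs; StackNet repeats
# this construction over many levels, like layers of a neural network.
meta = LogisticRegression().fit(meta_features, y)
print(meta.score(meta_features, y))
```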
9

Ilyas, Andrew. "On practical robustness of machine learning systems." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122911.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 71-79).
We consider the importance of robustness in evaluating machine learning systems, and in particular systems involving deep learning. We consider these systems' vulnerability to adversarial examples: subtle, crafted perturbations to inputs which induce large changes in output. We show that these adversarial examples are not only a theoretical concern, by designing the first 3D adversarial objects and by demonstrating that these examples can be constructed even when malicious actors have little power. We suggest a potential avenue for building robust deep learning models by leveraging generative models.
by Andrew Ilyas.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
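The kind of perturbation described can be illustrated with the fast gradient sign method on a linear model, where the input gradient is available in closed form. A toy sketch, unrelated to the thesis's 3D attacks:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)    # label depends only on feature 0
clf = LogisticRegression().fit(X, y)

x = X[0:1]
p = clf.predict_proba(x)[0, 1]
# For logistic loss the input gradient is (p - y) * w, so one
# fast-gradient-sign step moves every coordinate by only eps in the
# loss-increasing direction.
grad = (p - y[0]) * clf.coef_[0]
x_adv = x + 0.25 * np.sign(grad)
print(clf.predict(x), clf.predict(x_adv))  # a small change often flips the label
```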
10

ROSA, BRUSIN ANN MARGARETH. "Machine Learning Applications to Optical Communication Systems." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2967019.

11

Thomas, Sabin M. (Sabin Mammen). "A system analysis of improvements in machine learning." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/100386.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, February 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 50-51).
Machine learning algorithms used for natural language processing (NLP) currently take too long to complete their learning function. This slow learning performance tends to make the models ineffective for the increasing demand for real-time applications such as voice transcription, language translation, text summarization, topic extraction and sentiment analysis. Moreover, current implementations run in an offline batch-mode operation and are unfit for real-time needs. Newer machine learning algorithms are being designed that make better use of sampling and distributed methods to speed up learning performance. In my thesis, I identify unmet market opportunities where machine learning is not employed in an optimal fashion, and I provide system-level suggestions and analyses that could improve performance, accuracy and relevance.
by Sabin M. Thomas.
S.M. in Engineering and Management
12

Tynong, Anton. "Machine learning for planning in warehouse management." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178108.

13

Chi, Chih-Lin. "Medical decision support systems based on machine learning." Iowa City : University of Iowa, 2009. http://ir.uiowa.edu/etd/283.

14

Hsu, David. "Silicon primitives for machine learning." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/6909.

15

Chi, Chih-Lin. "Medical decision support systems based on machine learning." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/283.

Abstract:
This dissertation discusses three problems from different areas of medical research and their machine learning solutions. Each solution is a distinct type of decision support system, and they share three common properties: personalized healthcare decision support, reduced use of medical resources, and improved outcomes. The first decision support system assists individual hospital selection. This system can help a user make the best decision in terms of the combination of mortality, complication, and travel distance. Both machine learning and optimization techniques are utilized in this type of decision support system. Machine learning methods, such as Support Vector Machines, learn a decision function. Next, the function is transformed into an objective function, and optimization methods are then used to find the values of decision variables that reach the desired outcome with the most confidence. The second decision support system assists diagnostic decisions in a sequential decision-making setting by finding the most promising tests and suggesting a diagnosis. The system can speed up the diagnostic process, reduce overuse of medical tests, save costs, and improve the accuracy of diagnosis. In this study, the system finds the test most likely to confirm a diagnosis based on the pre-test probability computed from the patient's information, including symptoms and the results of previous tests. If the patient's post-test disease probability is higher than the treatment threshold, a diagnostic decision is made; if it is low enough, the diagnosis is ruled out. Otherwise, the patient needs more tests to help make a decision, and the system recommends the next optimal test and repeats the process. The third decision support system recommends the best lifestyle changes for an individual to lower the risk of cardiovascular disease (CVD). As in the hospital recommendation system, machine learning and optimization are combined to capture the relationship between lifestyle and CVD, and then to generate recommendations based on individual factors including preference and physical condition. The results demonstrate several recommendation strategies: a whole plan of lifestyle changes, a package of n lifestyle changes, and a compensatory plan (one that compensates for unwanted lifestyle changes or real-world limitations).
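The sequential testing logic in the second system can be made concrete with Bayes' rule in odds form: each test result multiplies the pre-test odds by its likelihood ratio, and testing stops once a threshold is crossed. The probabilities and test characteristics below are invented for illustration:

```python
def post_test_probability(pre_prob, sensitivity, specificity, positive=True):
    """Update a disease probability with one test result via Bayes' rule."""
    lr = (sensitivity / (1 - specificity) if positive
          else (1 - sensitivity) / specificity)
    pre_odds = pre_prob / (1 - pre_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

prob = 0.20                  # hypothetical pre-test probability
treatment_threshold = 0.90   # illustrative treatment threshold
for sens, spec in [(0.90, 0.80), (0.85, 0.95)]:   # two positive test results
    prob = post_test_probability(prob, sens, spec, positive=True)
    print(f"post-test probability: {prob:.2f}")
    if prob > treatment_threshold:
        print("treatment threshold crossed: diagnostic decision made")
        break
```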
16

Eagle, Nathan Norfleet. "Machine perception and learning of complex social systems." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32498.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.
Includes bibliographical references (p. 125-136).
The study of complex social systems has traditionally been an arduous process, involving extensive surveys, interviews, ethnographic studies, or analysis of online behavior. Today, however, it is possible to use the unprecedented amount of information generated by pervasive mobile phones to provide insights into the dynamics of both individual and group behavior. Information such as continuous proximity, location, communication and activity data has been gathered from the phones of 100 human subjects at MIT. Systematic measurements from these 100 people over the course of eight months have generated one of the largest datasets of continuous human behavior ever collected, representing over 300,000 hours of daily activity. In this thesis we describe how this data can be used to uncover regular rules and structure in the behavior of both individuals and organizations, infer relationships between subjects, verify self-report survey data, and study social network dynamics. By combining theoretical models with rich and systematic measurements, we show it is possible to gain insight into the underlying behavior of complex social systems.
by Nathan Norfleet Eagle.
Ph.D.
17

Prueller, Hans. "Distributed online machine learning for mobile care systems." Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10875.

Abstract:
Telecare, and especially Mobile Care Systems, are becoming more and more popular. They have two major benefits: first, they drastically improve living standards and even health outcomes for patients; in addition, they allow significant cost savings in adult care by reducing the need for medical staff. A common drawback of current Mobile Care Systems is that in most cases they are stationary, firmly installed in patients’ houses or flats, which keeps patients very near to or even in their homes. There is also an emerging second category of Mobile Care Systems which are portable without restricting the moving space of the patients, but with the major drawback that they either have very limited computational abilities and rather low classification quality or, most frequently, have a very short runtime on battery and therefore indirectly restrict the patients' freedom of movement once again. These drawbacks are inherently caused by the restricted computational resources and mainly by the limitations of the battery-based power supply of mobile computer systems. This research investigates the application of novel Artificial Intelligence (AI) and Machine Learning (ML) techniques to improve the operation of Mobile Care Systems. As a result, based on the Evolving Connectionist Systems (ECoS) paradigm, an innovative approach to a highly efficient and self-optimising distributed online machine learning algorithm called MECoS - Moving ECoS - is presented. It balances the conflicting needs of providing a highly responsive, complex and distributed online learning classification algorithm while requiring only limited resources in the form of computational power and energy. This approach overcomes the drawbacks of current mobile systems and combines them with the advantages of powerful stationary approaches. The research concludes that the practical application of the presented MECoS algorithm offers substantial improvements to the problems highlighted within this thesis.
18

Yang, Yizhan. "Machine Learning Based Beam Tracking in mmWave Systems." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292754.

Abstract:
The demand for high-data-rate communication and the scarcity of spectrum in existing microwave bands have been key aspects of 5G. To fulfil these demands, millimeter wave (mmWave) bands with large bandwidths have been proposed to enhance the efficiency and stability of the 5G network. In mmWave communication, the transmission signal from the antenna is concentrated by beamforming and beam tracking. However, state-of-the-art methods in beam tracking lead to high resource consumption. To address this problem, we develop two machine-learning-based solutions for overhead reduction. In this thesis, a scenario configuration simulator is proposed as the data collection approach. Several LSTM-based time series prediction models are trained for the experiments. Since the overhead is reduced by decreasing the number of swept beams in our solutions, multiple data imputation methods are proposed to improve the performance of the solution. These methods are based on Multiple Imputation by Chained Equations (MICE) and generative adversarial networks. Both qualitative and quantitative experimental results on several types of datasets demonstrate the efficacy of our solution.
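Of the imputation methods named, MICE has a readily available approximation in scikit-learn's IterativeImputer, which models each missing feature from the observed ones. A sketch on an invented beam-measurement matrix, where NaN marks beams skipped to cut sweeping overhead:

```python
import numpy as np
# IterativeImputer is still experimental and must be enabled explicitly.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical sweep matrix: rows are time steps, columns per-beam signal
# strengths; NaN marks beams that were not swept at that step.
X = np.array([[1.0, 2.0, np.nan],
              [2.1, np.nan, 6.2],
              [np.nan, 4.1, 6.0],
              [1.2, 2.2, 3.1]])

imputer = IterativeImputer(max_iter=10, random_state=0)
print(imputer.fit_transform(X))   # the skipped measurements are filled in
```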
19

Al-Khoury, Fadi. "Safety of Machine Learning Systems in Autonomous Driving." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-218020.

Abstract:
Machine Learning, and in particular Deep Learning, are extremely capable tools for solving problems which are difficult or intractable to tackle analytically. Application areas include pattern recognition, computer vision, and speech and natural language processing. With the automotive industry aiming for an increasing amount of automation in driving, the problems to solve become increasingly complex, which invites the use of supervised learning methods from Machine Learning and Deep Learning. With this approach, solutions to the problems are learned implicitly from training data, and inspecting their correctness directly is not possible. This raises concerns when the resulting systems are used to support safety-critical functions, as is the case with autonomous driving of automotive vehicles. This thesis studies the safety concerns related to learning systems within autonomous driving and applies a safety-monitoring approach to a collision avoidance scenario. Experiments are performed in a simulated environment, with a deep learning system supporting perception for vehicle control and a safety monitor for collision avoidance. The related operational situations and safety constraints are studied for an autonomous driving function, with potential faults in the learning system introduced and examined. Finally, an example is considered of a measure that indicates the trustworthiness of the learning system during operation.
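The safety-monitor pattern described here amounts to a runtime envelope check that overrides the learned controller when a constraint is about to be violated; a minimal sketch in which the braking model and all parameter values are invented:

```python
def safe_longitudinal_control(ml_acceleration, gap_m, ego_speed_ms,
                              max_brake_ms2=6.0, margin_m=5.0):
    """Collision-avoidance monitor: override the learned controller if the
    remaining gap can no longer absorb a full stop."""
    stopping_distance = ego_speed_ms ** 2 / (2 * max_brake_ms2)
    if gap_m < stopping_distance + margin_m:
        return -max_brake_ms2    # monitor takes over: brake hard
    return ml_acceleration       # learned system remains in control

# 15 m/s with a 20 m gap needs ~18.75 m to stop, so the monitor intervenes.
print(safe_longitudinal_control(ml_acceleration=1.0, gap_m=20.0, ego_speed_ms=15.0))
```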
20

Salehi, Shahin. "Machine Learning for Contact Mechanics from Surface Topography." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-76531.

21

Ball, N. R. "Cognitive maps in Learning Classifier Systems." Thesis, University of Reading, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280670.

22

Li, Cui. "Image quality assessment using algorithmic and machine learning techniques." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources. Restricted: no access until June 2, 2014, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=26521.

Abstract:
Thesis (Ph.D.)--Aberdeen University, 2009.
With: An image quality metric based in corner, edge and symmetry maps / Li Cui, Alastair R. Allen. With: An image quality metric based on a colour appearance model / Li Cui and Alastair R. Allen. ACIVS / J. Blanc-Talon et al. eds. 2008 LNCS 5259, 696-707. Includes bibliographical references.
23

Tarberg, Alexander. "Skydd av Kritisk Infrastruktur med Machine Learning." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74514.

Abstract:
The purpose of this thesis is to evaluate whether the Nash strategy can be an alternative to Stackelberg for the protection of subway stations, with the help of machine learning and game theory. The thesis describes the development and testing of two algorithms in a simulated environment, chess. Chess worked well as a test environment for obtaining concrete results and scoring, and it also confirmed that the Nash strategy works for games where all information is available. These results and the best-performing algorithm were used to decide at which stations NBK sensors, which protect against nuclear, biological and chemical attacks, should be placed. The results of the study showed that the Nash strategy, using the minimax algorithm, is a viable alternative to Stackelberg both within and outside the security domain. The conclusion is that Nash has good potential for future studies and should be examined further, with more variables and the effects of using Nash instead of Stackelberg in security games.
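The minimax search underlying the evaluated algorithms can be sketched generically; move generation and position evaluation are passed in as stubs, since the chess-specific parts are not described in the abstract:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Plain minimax: each player chooses the move that is optimal against
    a best-responding opponent, the equilibrium logic discussed above."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_val = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in legal:
        val, _ = minimax(apply_move(state, move), depth - 1,
                         not maximizing, moves, apply_move, evaluate)
        if (maximizing and val > best_val) or (not maximizing and val < best_val):
            best_val, best_move = val, move
    return best_val, best_move
```

For chess-scale branching factors, alpha-beta pruning would normally be layered on top of this skeleton.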
24

Badayos, Noah Garcia. "Machine Learning-Based Parameter Validation." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/47675.

Abstract:
As power system grids continue to grow in order to support increasing energy demand, the system's behavior evolves accordingly, continuing to challenge designs for maintaining security. It has become apparent in the past few years that accurate simulations are just as critical as discovering vulnerabilities in the power network. This study explores a classification method for validating simulation models, using disturbance measurements from phasor measurement units (PMUs). The technique employs the Random Forest learning algorithm to find a correlation between specific model parameter changes and the variations in the dynamic response. The measurements used for building and evaluating the classifiers were also characterized using Prony decomposition. The generator model, consisting of an exciter, a governor, and its standard parameters, has been validated using short circuit faults. Single-error classifiers were tested first, comparing the accuracies of classifiers built using positive, negative, and zero sequence measurements. The negative sequence measurements consistently produced the best classifiers, with the majority of the parameter classes attaining F-measure accuracies greater than 90%. A multiple-parameter-error technique for validation has also been developed and tested on standard generator parameters. Only a few target parameter classes had good accuracies in the presence of multiple parameter errors, but the results were enough to permit a sequential process of validation, where elimination of a highly detectable error can improve the accuracy of suspect errors dependent on the former's removal, continuing the procedure until all corrections are covered.
Ph. D.
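The per-class evaluation reported (F-measure per parameter class) can be sketched with scikit-learn; the feature vectors below stand in for Prony-characterized disturbance responses and are randomly generated for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Invented stand-ins: Prony-style damping/frequency features per response,
# labelled with the generator parameter that was perturbed.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = rng.choice(["exciter_gain", "governor_droop", "inertia"], size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f1_score(y_te, clf.predict(X_te), average=None))  # F-measure per class
```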
25

Chlon, Leon. "Machine learning methods for cancer immunology." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/268068.

Abstract:
Tumours are highly heterogeneous collections of tissues characterised by a repertoire of heavily mutated and rapidly proliferating cells. Evading immune destruction is a fundamental hallmark of cancer, and elucidating the contextual basis of tumour-infiltrating leukocytes is pivotal for improving immunotherapy initiatives. However, progress in this domain is hindered by an incomplete characterisation of the regulatory mechanisms involved in cancer immunity. Addressing this challenge, this thesis is formulated around a fundamental line of inquiry: how do we quantitatively describe the immune system with respect to tumour heterogeneity? Describing the molecular interactions between cancer cells and the immune system is a fundamental goal of cancer immunology. The first part of this thesis describes a three-stage association study to address this challenge in pancreatic ductal adenocarcinoma (PDAC). Firstly, network-based approaches are used to characterise PDAC on the basis of transcription factor regulators of an oncogenic KRAS signature. Next, gene expression tools are used to resolve the leukocyte subset mixing proportions, stromal contamination, immune checkpoint expression and immune pathway dysregulation from the data. Finally, partial correlations are used to characterise immune features in terms of KRAS master regulator activity. The results are compared across two independent cohorts for consistency. Moving beyond associations, the second part of the dissertation introduces a causal modelling approach to infer directed interactions between signaling pathway activity and immune agency. This is achieved by anchoring the analysis on somatic genomic changes. In particular, copy number profiles, transcriptomic data, image data and a protein-protein interaction network are integrated using graphical modelling approaches to infer directed relationships. Generated models are compared between independent cohorts and orthogonal datasets to evaluate consistency. Finally, proposed mechanisms are cross-referenced against literature examples to test for legitimacy. In summary, this dissertation provides methodological contributions, at the levels of associative and causal inference, for inferring the contextual basis for tumour-specific immune agency.
26

Arnold, Naomi (Naomi Aiko). "Wafer defect prediction with statistical machine learning." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105633.

Abstract:
Thesis: S.M. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 81-83).
In the semiconductor industry, where technology continues to grow in complexity while also striving for lower manufacturing costs, it is becoming increasingly important to drive cost savings by screening out defective die upstream. The primary goal of the project is to build a statistical prediction model to facilitate operational improvements across two global manufacturing locations. The scope of the project includes one high-volume product line, an off-line statistical model using historical production data, and experimentation with machine learning algorithms. The prediction model pilot demonstrates that there is potential to improve the wafer sort process using a random forest classifier on wafer- and die-level datasets, but more development is needed before concluding that die-level predictions of final memory test defects are possible. Key findings include the importance of model computational performance in big data problems, the necessity of a living model that stays accurate over time to meet operational needs, and an evaluation methodology based on business requirements. This project provides a case study for a high-level strategy of assessing big data and advanced analytics applications to improve semiconductor manufacturing.
by Naomi Arnold.
S.M. in Engineering Systems
M.B.A.
27

Thomson, John D. "Using machine learning to automate compiler optimisation." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3194.

Abstract:
Many optimisations in modern compilers have been traditionally based around using analysis to examine certain aspects of the code; the compiler heuristics then make a decision based on this information as to what to optimise, where to optimise and to what extent to optimise. The exact contents of these heuristics have been carefully tuned by experts, using their experience, as well as analytical tools, to produce solid performance. This work proposes an alternative approach – that of using proper statistical analysis to drive these optimisation goals instead of human intuition, through the use of machine learning. This work shows how, by using a probabilistic search of the optimisation space, we can achieve a significant speedup over the baseline compiler with the highest optimisation settings, on a number of different processor architectures. Additionally, there follows a further methodology for speeding up this search by being able to transfer our knowledge of one program to another. This thesis shows that, as is the case in many other domains, programs can be successfully represented by program features, which can then be used to gauge their similarity and thus the applicability of previously learned off-line knowledge. Employing this method, we are able to gain the same results in terms of performance, reducing the time taken by an order of magnitude. Finally, it is demonstrated how statistical analysis of programs allows us to learn additional important optimisation information, purely by examining the features alone. By incorporating this additional information into our model, we show how good results can be achieved in just one compilation. This work is tested on real hardware, for both the embedded and general purpose domain, showing its wide applicability.
28

He, Haibo. "Dynamically Self-reconfigurable Systems for Machine Intelligence." Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1152717376.

29

Niemi, Mikael. "Machine Learning for Rapid Image Classification." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97375.

Abstract:
In this thesis project, techniques for training a rapid image classifier that can recognize an object of a predefined type have been studied. Classifiers have been trained with the AdaBoost algorithm, with and without the use of Viola-Jones cascades. The use of weight trimming in classifier training has been evaluated; it resulted in a significant speed-up of the training, as well as improved performance of the trained classifier. Different preprocessings of the images have also been tested, but when used individually they mostly resulted in worse classifier performance. Several rectangle-shaped Haar-like features, including novel versions, have been evaluated, and the magnitude versions proved best at separating the image classes.
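Weight trimming is a one-step idea: after each boosting round, drop the lowest-weight training samples whose combined mass falls outside a kept fraction, so the next weak learner trains on fewer examples. A sketch of that step, with an illustrative threshold value:

```python
import numpy as np

def trim_weights(weights, keep_fraction=0.99):
    """Indices of the samples kept for the next AdaBoost round: the
    highest-weight samples covering `keep_fraction` of the total weight."""
    order = np.argsort(weights)[::-1]                    # heaviest first
    cumulative = np.cumsum(weights[order]) / weights.sum()
    n_keep = int(np.searchsorted(cumulative, keep_fraction)) + 1
    return order[:n_keep]

w = np.array([0.40, 0.30, 0.20, 0.05, 0.03, 0.02])
print(trim_weights(w, keep_fraction=0.95))  # the low-weight tail is dropped
```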
30

Bepler, Tristan(Tristan Wendland). "Machine learning for understanding protein sequence and structure." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129888.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, February 2020.
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 183-200).
Proteins are the fundamental building blocks of life, carrying out a vast array of functions at the molecular level. Understanding these molecular machines has been a core problem in biology for decades. Recent advances in cryo-electron microscopy (cryoEM) have enabled high-resolution experimental measurement of proteins in their native states. However, this technology remains expensive and low throughput. At the same time, ever-growing protein databases offer new opportunities for understanding the diversity of natural proteins and for linking sequence to structure and function. This thesis introduces a variety of machine learning methods for accelerating protein structure determination by cryoEM and for learning from large protein databases. We first consider the problem of protein identification in the large images collected in cryoEM. We propose a positive-unlabeled learning framework that enables high-accuracy particle detection with few labeled data points, improving both data quality and analysis speed. Next, we develop a deep denoising model for cryo-electron micrographs. By learning the denoising model from large amounts of real cryoEM data, we are able to capture the noise generation process and accurately denoise micrographs, improving the ability of experimentalists to examine and interpret their data. We then introduce a neural network model for understanding continuous variability in proteins in cryoEM data by explicitly disentangling variation of interest (structure) from nuisance variation due to rotation and translation. Finally, we move beyond cryoEM and propose a method for learning vector embeddings of proteins using information from structure and sequence. Many of the machine learning methods developed here are general purpose and can be applied to other data domains.
by Tristan Bepler.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Computational and Systems Biology Program
31

Tham, Alan (Alan An Liang). "A guiding framework for applying machine learning in organizations." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107598.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, Engineering and Management Program, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 93-97).
Machine Learning (ML) is an emerging business capability that has transformed many organizations by enabling them to learn from past data and helping them predict or make decisions about unknown future events. While ML is no longer the preserve of large IT companies, there are abundant opportunities for mid-sized organizations that lack the resources of the larger IT companies to exploit their data through ML and gain deeper insights. This thesis outlines these opportunities and provides guidance for the adoption of ML by such organizations. It examines the available literature on the current state of ML adoption by organizations, which highlights the gaps that motivate the thesis in providing a guiding framework for applying ML. To achieve this, the thesis provides the practitioner with an overview of ML from both technology and business perspectives, integrated from multiple sources, categorized for ease of reference, and communicated at the decision-making level without delving into the mathematics behind ML. The thesis thereafter proposes the ML Integration framework for the System Architect to review the enterprise model, identify opportunities, evaluate technology adoption and architect the ML system. In this framework, system architecting methodologies as well as Object-Process Diagrams are used to illustrate the concepts and the architecture. The ML Integration framework is subsequently applied in the context of a hypothetical mid-sized hospital to illustrate how an architect would utilize it. Future work is needed to validate the ML Integration framework, as well as to improve the overview of ML for specific application domains such as recommender systems and speech/image recognition.
by Alan Tham.
S.M. in Engineering and Management
32

Sheikholeslami, Sina. "Ablation Programming for Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258413.

Abstract:
As machine learning systems are being used in an increasing number of applications, from analysis of satellite sensor data and health-care analytics to smart virtual assistants and self-driving cars, they are also becoming more and more complex. This means that more time and computing resources are needed to train the models, and the number of design choices and hyperparameters increases as well. Due to this complexity, it is usually hard to explain the effect of each design choice or component of the machine learning system on its performance. A simple approach for addressing this problem is to perform an ablation study, a scientific examination of a machine learning system in order to gain insight into the effects of its building blocks on its overall performance. However, ablation studies are currently not part of standard machine learning practice. One of the key reasons for this is the fact that, currently, performing an ablation study requires major modifications of the code as well as extra compute and time resources. On the other hand, experimentation with a machine learning system is an iterative process that consists of several trials. A popular approach is to run these trials in parallel on an Apache Spark cluster. Since Apache Spark follows the Bulk Synchronous Parallel model, parallel execution of trials proceeds in several stages separated by barriers. This means that in order to execute a new set of trials, all trials from the previous stage must have finished. As a result, we usually end up wasting a lot of time and computing resources on unpromising trials that could have been stopped soon after their start. We attempt to address these challenges by introducing Maggy, an open-source framework for asynchronous and parallel hyperparameter optimization and ablation studies with Apache Spark and TensorFlow. This framework allows for better resource utilization as well as ablation studies and hyperparameter optimization in a unified and extendable API.
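In its simplest form, an ablation study retrains the model with one component removed at a time and compares scores against the full system; a generic single-machine sketch (not Maggy's actual API):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0)
baseline = cross_val_score(model, X, y, cv=5).mean()
print(f"full model: {baseline:.3f}")

# One ablation trial per feature: drop it, retrain, measure the change.
for i in range(8):   # first 8 features only, for brevity
    score = cross_val_score(model, np.delete(X, i, axis=1), y, cv=5).mean()
    print(f"without feature {i}: {score:.3f} ({score - baseline:+.3f})")
```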
33

Tripathi, Nandita. "Two-level text classification using hybrid machine learning techniques." Thesis, University of Sunderland, 2012. http://sure.sunderland.ac.uk/3305/.

Full text
Abstract:
Nowadays, documents are increasingly associated with multi-level category hierarchies rather than a flat category scheme. To access these documents in real time, we need fast automatic methods to navigate these hierarchies. Today's vast data repositories, such as the web, also contain many broad domains of data which are quite distinct from each other, e.g. medicine, education, sports and politics. Each domain constitutes a subspace of the data within which the documents are similar to each other but quite distinct from the documents in another subspace. The data within these domains is frequently further divided into many subcategories. Subspace learning is a technique popular in non-text domains such as image recognition for increasing speed and accuracy. Subspace analysis lends itself naturally to the idea of hybrid classifiers: each subspace can be processed by the classifier best suited to the characteristics of that particular subspace. Instead of using the complete set of full-space feature dimensions, classifier performance can be boosted by using only a subset of the dimensions. This thesis presents a novel hybrid parallel architecture using separate classifiers trained on separate subspaces to improve two-level text classification. The classifier to be used on a particular input and the relevant feature subset to be extracted are determined dynamically by a novel method based on the maximum significance value. A novel vector representation which enhances the distinction between classes within the subspace is also developed. This novel system, the Hybrid Parallel Classifier, was compared against baselines of several single classifiers, such as the Multilayer Perceptron, and was found to be faster and to have higher two-level classification accuracy. The improvement in performance was even greater when dealing with more complex category hierarchies.
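As a rough sketch of the architecture described above, one classifier per domain subspace can be combined with dynamic routing. The "significance" score below (mean absolute feature value in the subspace) is a crude stand-in for the thesis's maximum significance value, and the data is synthetic:

    # Hedged sketch of a hybrid parallel classifier: one model per subspace,
    # routed by a stand-in significance score. Not the thesis's exact method.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = (X[:, :10].sum(axis=1) > 0).astype(int)

    subspaces = {"medicine": slice(0, 10), "sports": slice(10, 20)}
    models = {name: MLPClassifier(max_iter=1000).fit(X[:, cols], y)
              for name, cols in subspaces.items()}

    def classify(x):
        # Route to the subspace with the highest stand-in significance value.
        name = max(subspaces, key=lambda n: np.abs(x[subspaces[n]]).mean())
        return models[name].predict(x[subspaces[name]].reshape(1, -1))[0]

    print(classify(X[0]))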
APA, Harvard, Vancouver, ISO, and other styles
34

Adjodah, Dhaval D. K. (Adjodlah Dhaval Dhamnidhi Kumar). "Understanding social influence using network analysis and machine learning." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/81111.

Full text
Abstract:
Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 61-62).
If we are to enact better policy, fight crime and decrease poverty, we will need better computational models of how society works. In order to make computational social science a useful reality, we will need generative models of how social influence sprouts at the interpersonal level and how it leads to emergent social behavior. In this thesis, I take steps towards understanding the predictors and conduits of social influence by analyzing real-life data, and I use the findings to create a high-accuracy prediction model of individuals' future behavior. The funf dataset, which comprises detailed high-frequency data gathered from 25 mobile-phone-based signals from 130 people over a period of 15 months, will be used to test the hypothesis that people who interact more with each other have a greater ability to influence each other. Various metrics of interaction will be investigated, such as self-reported friendships, call and SMS logs, and Bluetooth co-location signals. The Burt network constraint of each pair of participants is calculated as a measure not only of the direct interaction between two participants but also of the indirect friendships through intermediate neighbors that form closed triads with both of the participants being assessed. To measure influence, the results of the live funf intervention will be used, in which each participant's behavior change towards being more physically active was rewarded, with the reward calculated live. There were three variants of the reward structure: one where each participant was rewarded for her own behavior change without seeing that of anybody else (the control), one where each participant was paired with two 'buddies' whose behavior change she could see live but was still rewarded based on her own behavior, and one where each participant, paired with two others, was paid based on their behavior change, which she could see live. As a metric for social influence, it will be considered how the change in slope and average physical activity level of one person follows that of the buddy who saw her data and/or was rewarded based on her performance. Finally, a linear regression model that uses the various types of direct and indirect network interactions will be created to predict the behavior change of one participant based on her closeness with her buddy. In addition to explaining and demonstrating the causes of social influence with unprecedented detail using network analysis and machine learning, I will discuss the larger topic of using such a technology-driven approach to changing behavior instead of the traditional policy-driven approach. The advantages of the technology-driven approach will be highlighted, and the potential political-economic pitfalls of implementing such a novel approach will also be addressed. Since technology-driven approaches to changing individual behavior can have serious negative consequences for democracy and the free market, I will introduce a novel dimension to the discussion of how to protect individuals from the state and from powerful private organizations. Hence, I will describe how transparency policies and civic engagement technologies can further this goal of 'watching the watchers'.
by Dhaval D.K. Adjodah.
S.M. in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Han. "Rule based systems for classification in machine learning context." Thesis, University of Portsmouth, 2015. https://researchportal.port.ac.uk/portal/en/theses/rule-based-systems-for-classification-in-machine-learning-context(1790225c-ceb1-48d3-9e05-689edbfa3ef1).html.

Full text
Abstract:
This thesis introduces a unified framework for the design of rule-based systems for classification tasks, which consists of the operations of rule generation, rule simplification and rule representation. The thesis also stresses the importance of combining different rule learning algorithms through ensemble learning approaches. For the three operations mentioned above, novel approaches are developed and validated by comparison with existing ones, to advance the performance of the framework. In particular, for rule generation, Information Entropy Based Rule Generation is developed and validated through comparison with Prism. For rule simplification, Jmid-pruning is developed and validated through comparison with J-pruning and Jmax-pruning. For rule representation, a rule-based network is developed and validated through comparison with decision trees and linear lists. The results show that the novel approaches complement the existing ones well in terms of accuracy, efficiency and interpretability. On the other hand, the thesis introduces ensemble learning approaches that involve collaboration in the training or testing stage through the combination of learning algorithms or models. In particular, the novel framework Collaborative and Competitive Random Decision Rules is created and validated through comparison with Random Prisms. The thesis also introduces another novel framework, Collaborative Rule Generation, which involves collaboration in the training stage through the combination of multiple learning algorithms; this framework is validated through comparison with each individual algorithm. In addition, the thesis shows that the above two frameworks can be combined into a hybrid ensemble learning framework to advance the overall performance of classification. This hybrid framework is validated through comparison with Random Forests. Finally, the thesis summarises the research contributions in terms of theoretical significance, practical importance, methodological impact and philosophical aspects. In particular, the theoretical significance includes the creation of the framework for the design of rule-based systems and the development of novel approaches for rule-based classification. The practical importance lies in the usefulness for knowledge discovery and predictive modelling and the independence from application domains and platforms. The methodological impact lies in the advances in the generation, simplification and representation of rules. The philosophical aspects include a novel understanding of data mining and machine learning in the context of human research and learning, and the inspiration drawn from information theory, systems theory and control theory towards methodological innovations. On the basis of the completed work, the thesis provides suggestions for further directions to advance this research area.
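To make the rule-generation operation concrete, here is a minimal, simplified sketch of an entropy-driven choice of a single rule term, in the spirit of Information Entropy Based Rule Generation; the toy data and the surrounding stopping logic are invented:

    # Simplified sketch: pick the attribute=value test whose covered subset of
    # examples has the lowest class entropy. Full rule induction iterates this.
    import math
    from collections import Counter

    data = [({"outlook": "sunny", "windy": "no"},  "play"),
            ({"outlook": "sunny", "windy": "yes"}, "stay"),
            ({"outlook": "rain",  "windy": "no"},  "play"),
            ({"outlook": "rain",  "windy": "yes"}, "stay")]

    def entropy(rows):
        counts = Counter(label for _, label in rows)
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def best_term(rows):
        terms = {(a, v) for feats, _ in rows for a, v in feats.items()}
        return min(terms,
                   key=lambda t: entropy([r for r in rows if r[0][t[0]] == t[1]]))

    print(best_term(data))  # e.g. ('windy', 'no'): its covered subset is pure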
APA, Harvard, Vancouver, ISO, and other styles
36

Testi, Enrico. "Machine Learning for User Traffic Classification in Wireless Systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
With the advent of the Internet of Things, telecommunications will play a crucial role in everyday life. The rapidly growing demand for radio services by millions of users all over the world will make the radio spectrum an increasingly valuable resource. Modern communication standards provide for static utilization of radio spectrum resources, which results in under-utilization. Let us therefore imagine a dynamic sharing of radio resources, where every device can use a portion of such resources if and only if they are not being utilized yet. In this regard, the Federal Communications Commission (FCC), the authority that regulates spectrum sharing in the USA, has decided to free some portions of the radio spectrum in order to allow dynamic usage. From this perspective, devices will have to probe the RF scene in the time, space and frequency domains to ensure that a well-defined portion of the spectrum is free, making multidimensional spectrum analysis mandatory. On large-scale infrastructures, the classification of transmissions, the spatial localization of events and the search for spectrum holes might indeed be done with extensive use of machine learning algorithms. Traffic classification makes it possible to automatically recognize the user-level application that has generated a given stream of packets, from direct observation of the packets or from the spectrum occupancy. In-depth knowledge of the composition of traffic, as well as the identification of trends in application usage, may help operators improve network design and provisioning. Moreover, traffic classification represents the first step towards activities such as anomaly detection for identifying malicious use of network resources, and security operations such as firewalling and the filtering of unwanted traffic. This work proposes a machine learning approach for wireless traffic classification in common bands, such as WiFi, with low-cost measurement devices.
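As a hedged illustration of the proposed direction, a flow-level classifier could look like the sketch below; the three features and the synthetic labels are invented stand-ins for measured traffic statistics:

    # Sketch: classify flows into user-level applications from simple packet
    # statistics. The features and labels are invented for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    # Columns: mean packet size, inter-arrival time, flow duration (toy values).
    X = rng.normal(size=(500, 3))
    y = rng.integers(0, 3, size=500)  # 0=web, 1=video, 2=voip (toy labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))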
APA, Harvard, Vancouver, ISO, and other styles
37

Berral, García Josep Lluís. "Improved self-management of datacenter systems applying machine learning." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134360.

Full text
Abstract:
Autonomic Computing is a Computer Science research area that originated in the mid 2000s. It focuses on the optimization and improvement of complex distributed computing systems through self-control and self-management. As distributed computing systems grow in complexity, like multi-datacenter systems in cloud computing, system operators and architects need more help to understand, design and optimize these systems manually, even more so when the systems are distributed around the world and belong to different entities and authorities. Self-management lets these distributed computing systems improve their resource and energy management, a very important issue when resources have a cost of being obtained, run or maintained. Here we propose to improve Autonomic Computing techniques for resource management by applying modeling and prediction methods from Machine Learning and Artificial Intelligence. Machine Learning methods can find accurate models of system behaviors, often with intelligible explanations, and can predict and infer system states and values. Models obtained from automatic learning have the advantage of being easily updated after workload or configuration changes, by re-collecting examples and re-training the predictors. By employing automatic modeling and prediction abilities, we can find new methods for making "intelligent" decisions and discovering new information and knowledge about systems. This thesis moves from the state of the art, where management is based on administrator expertise, well-known data, ad-hoc algorithms and models, and elements studied from the point of view of a single computing machine, to a novel approach where management is driven by models learned from the system itself, providing useful feedback and making up for incomplete, missing or uncertain data, from the point of view of a global network of datacenters. - First, we cover the scenario where the decision maker knows all the information about the system: how much each job will consume, what the desired quality of service is and will be, what the deadlines for the workload are, etc., focusing on each component and policy of each element involved in executing these jobs. - Then we focus on the scenario where, instead of fixed oracles that provide information from an expert formula or set of conditions, machine learning is used to create these oracles. Here we look at components and specific details while some of the information is unknown and must be learned and predicted. - We reduce the problem of optimizing resource allocations and requirements for virtualized web services to a mathematical problem, indicating each factor, variable and element involved, as well as all the constraints the scheduling process must satisfy. The scheduling problem can be modeled as a Mixed Integer Linear Program (a toy instance is sketched after this abstract). Here we face the scenario of a full datacenter, and we further introduce prediction of some of this information. - We complement the model by expanding the predicted elements, studying the main resources (CPU, memory and IO), which can suffer from noise, inaccuracy or unavailability. Once learned predictors for certain components improve decision making, the system can become more independent of expert knowledge, and research can focus on a scenario where all the elements provide noisy, uncertain or private information. We also introduce new factors into the management optimization, as context and costs may change for each datacenter, turning the model into a "multi-datacenter" one. - Finally, we review the cost of placing datacenters depending on green energy sources, and distribute the load according to green energy availability.
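The following sketch, using the PuLP package with invented job demands and host capacities, shows the shape of such a placement MILP; it minimises the number of powered-on hosts rather than the thesis's full cost model:

    # Toy MILP: place jobs on hosts to minimise powered-on machines, subject
    # to CPU capacity. Requires the PuLP package; all numbers are invented.
    import pulp

    jobs = {"j1": 2, "j2": 3, "j3": 1}      # CPU demand per job
    hosts = {"h1": 4, "h2": 4}              # CPU capacity per host

    prob = pulp.LpProblem("placement", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (jobs, hosts), cat="Binary")   # job -> host
    on = pulp.LpVariable.dicts("on", hosts, cat="Binary")         # host powered on

    prob += pulp.lpSum(on[h] for h in hosts)                      # objective
    for j in jobs:                                                # place each job once
        prob += pulp.lpSum(x[j][h] for h in hosts) == 1
    for h in hosts:                                               # capacity per host
        prob += pulp.lpSum(jobs[j] * x[j][h] for j in jobs) <= hosts[h] * on[h]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({j: next(h for h in hosts if x[j][h].value() > 0.5) for j in jobs})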
APA, Harvard, Vancouver, ISO, and other styles
38

Sonal, Manish. "Machine Learning for PAPR Distortion Reduction in OFDM Systems." Thesis, KTH, Signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-197682.

Full text
Abstract:
The purpose of the project is to investigate the possibility of using modern machine learning to model nonlinear analog devices like the power amplifier (PA), and to study the feasibility of using such models in wireless systems design. Orthogonal frequency-division multiplexing (OFDM) is one of the most prominent modulation techniques, used in several standards such as 802.11a, 802.11n, 802.11ac and more. Telecommunication systems like LTE, LTE-A and WiMAX are also based on OFDM. Nevertheless, an OFDM signal exhibits a high peak-to-average power ratio (PAPR) in the time domain because it comprises many subcarriers added via the inverse fast Fourier transform (IFFT). High PAPR results in an increased symbol error rate while degrading the efficiency of the PA. Digital predistortion (DPD) still suffers from a high symbol error rate (SER) and reduced PA efficiency when the peak back-off (PBO) increases. A receiver-based nonlinearity distortion reduction approach can be justified by the fact that base stations have high computational power. An iterative decision-feedback mitigation technique can be implemented as receiver-side compensation, assuming memoryless PA nonlinearities. For successful distortion reduction, the iterative-decision-based technique requires knowledge of the transmitter PA. The author proposes to identify the nonlinear PA model using machine learning techniques like nonlinear regression and deep learning. The results show promising improvement in SER reduction with a small PA-model learning time.
The purpose of this project is to investigate the possibilities of using modern machine learning to describe nonlinear analog devices such as power amplifiers, and to study how useful such models are for designing wireless communication systems. OFDM (orthogonal frequency-division multiplexing) is one of the most common modulation techniques, used in standards such as 802.11a, 802.11n, 802.11ac and others. Telecommunication systems such as LTE, LTE-A and WiMAX are also based on OFDM. However, OFDM results in a high peak-to-average power ratio (PAPR) in the time domain, since the signal consists of many subcarriers that are summed via the inverse discrete Fourier transform (IFFT). A high PAPR increases the symbol error rate and degrades the efficiency of the power amplifier. Digital predistortion (DPD) can improve the situation but still yields a high symbol error rate and reduced amplifier efficiency when the transmit power is backed off to avoid the remaining nonlinearities. Reducing the distortion from the nonlinearities at the receiver can be justified in systems where the base stations have high computational capacity. A distortion-reduction method based on iterative decision feedback can be implemented on the receiver side, under the assumption that the transmitter's power amplifier has a memoryless nonlinearity. For the distortion reduction to work well, good knowledge of the transmitter's power amplifier is required. The author proposes to identify a nonlinear model of the amplifier using machine learning techniques such as nonlinear regression and deep learning. The results show promising improvements in the symbol error rate with a short learning time for the amplifier model.
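As one simple stand-in for the nonlinear-regression idea above, a memoryless AM/AM characteristic can be fitted by polynomial regression; the tanh "PA" below is a toy, not a measured amplifier:

    # Hedged sketch: learn a memoryless AM/AM PA curve by polynomial regression.
    import numpy as np

    rng = np.random.default_rng(2)
    amp_in = rng.uniform(0, 1, 1000)
    amp_out = np.tanh(2 * amp_in) + 0.01 * rng.normal(size=1000)  # toy saturating PA

    coeffs = np.polyfit(amp_in, amp_out, deg=5)   # low-order fit of the nonlinearity
    pa_model = np.poly1d(coeffs)
    print("modelled output at 0.5:", pa_model(0.5), "true:", np.tanh(1.0))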
APA, Harvard, Vancouver, ISO, and other styles
39

Maglaras, Leandros. "Intrusion detection in SCADA systems using machine learning techniques." Thesis, University of Huddersfield, 2018. http://eprints.hud.ac.uk/id/eprint/34578/.

Full text
Abstract:
Modern Supervisory Control and Data Acquisition (SCADA) systems are essential for monitoring and managing electric power generation, transmission and distribution. In the age of the Internet of Things, SCADA has evolved into big, complex and distributed systems that are prone to new threats in addition to conventional ones. To detect intruders in a timely and efficient manner, a real-time detection mechanism capable of dealing with a range of forms of attack is highly salient. Such a mechanism has to be distributed, low cost, precise, reliable and secure, with a low communication overhead, so that it does not interfere with the industrial system's operation. In this commentary, two distributed Intrusion Detection Systems (IDSs) able to detect attacks that occur in a SCADA system are proposed, both developed and evaluated for the purposes of the CockpitCI project. The CockpitCI project proposes an architecture based on a real-time Perimeter Intrusion Detection System (PIDS), which provides the core cyber-analysis and detection capabilities and is responsible for continuously assessing and protecting the electronic security perimeter of each critical infrastructure. Part of the PIDS developed for the CockpitCI project is the OCSVM module. During the project, two novel OCSVM modules were developed and tested using datasets from a small-scale testbed created to mimic a SCADA system operating both in normal conditions and under the influence of cyberattacks. The first method, K-OCSVM, can distinguish real from false alarms using the OCSVM method with default values for the parameters ν and σ, combined with a recursive K-means clustering method. K-OCSVM differs from similar methods, which require pre-selection of parameters through cross-validation or other methods that ensemble the outcomes of one-class classifiers. Building on K-OCSVM, and in order to cope with the high requirements imposed by the CockpitCI project in terms of both accuracy and time overhead, a second method, IT-OCSVM, is presented. The IT-OCSVM method performs outlier detection with high accuracy and low overhead within a temporal window adequate for the nature of SCADA systems. The two presented methods perform well under several attack scenarios. Having to balance high accuracy, a low false-alarm rate, real-time communication requirements and low overhead under complex and usually persistent attack situations, a combination of several techniques is needed. Despite the range of intrusion detection activities, it has been shown that half of these incidents have human error at their core. Increased empirical and theoretical research into the human aspects of cyber security, based on the volume of incidents related to human error, can enhance the cyber security capabilities of modern systems. Another solution for strengthening the security of SCADA systems is to deliver defence in depth by layering security controls, so as to reduce the risk to the assets being protected.
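A greatly simplified sketch of the K-OCSVM idea (a default-parameter One-Class SVM raises alarms, then K-means over the alarm scores separates severe from borderline ones; the thesis's recursive refinement is omitted) might look like this, on synthetic features:

    # Hedged sketch, not the thesis's implementation; data is synthetic.
    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    normal = rng.normal(0, 1, size=(300, 2))   # normal SCADA traffic features
    attack = rng.normal(6, 1, size=(10, 2))    # injected anomalies
    X = np.vstack([normal, attack])

    ocsvm = OneClassSVM().fit(normal)          # default parameters, as in K-OCSVM
    scores = ocsvm.decision_function(X)
    alarms = X[scores < 0]

    km = KMeans(n_clusters=2, n_init=10).fit(scores[scores < 0].reshape(-1, 1))
    print(len(alarms), "alarms,", np.bincount(km.labels_), "per severity cluster")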
APA, Harvard, Vancouver, ISO, and other styles
40

Grzeidak, Emerson. "Identification of nonlinear systems based on extreme learning machine." reponame:Repositório Institucional da UnB, 2016. http://repositorio.unb.br/handle/10482/21603.

Full text
Abstract:
Dissertation (Master's)—Universidade de Brasília, Programa de Pós-Graduação em Sistemas Mecatrônicos, 2016.
This work considers the problem of identifying nonlinear systems of uncertain structure in the presence of bounded disturbances. Given the uncertain structure of the system, state estimation is based on single-hidden-layer neural networks; then, to ensure convergence of the residual state-estimation errors to zero, the learning laws are designed using Lyapunov stability theory and results already available in adaptive control theory. First, an identification scheme using extreme learning is presented. The proposed model ensures convergence of the residual state-estimation errors to zero and the boundedness of all other errors and disturbances. Using Barbalat's lemma and a Lyapunov-type analysis, a dynamic single-hidden-layer neural network (SHLNN) model with randomly generated hidden nodes is employed to guarantee the aforementioned properties. In this way, faster convergence and better computational efficiency than conventional SHLNN models are ensured. Moreover, with some modifications involving the choice of activation function and the structure of the regressor vector, the proposed algorithm can be applied to any linearly parameterizable neural network. Next, as an extension of the proposed methodology, a nonlinearly parameterized single-hidden-layer neural network (SHLNN) model is studied. The hidden and output weights are adjusted simultaneously by robust adaptive laws obtained through Lyapunov stability theory. The second scheme also ensures convergence of the residual state-estimation errors to zero and the boundedness of all other associated approximation errors, even in the presence of approximation errors and disturbances. Additionally, as in the first scheme, no prior knowledge of the ideal weights, approximation errors or disturbances is required. Extensive simulations validating the theoretical results and demonstrating the proposed methods are provided.
The present research work considers the problem of identifying nonlinear systems with uncertain structure in the presence of bounded disturbances. Given the uncertain structure of the system, state estimation is based on single-hidden-layer neural networks; then, to ensure convergence of the residual state-estimation errors to zero, the learning laws are designed using Lyapunov stability theory and results already available in adaptive control theory. First, an identification scheme via an extreme learning machine neural network is developed. The proposed model ensures convergence of the residual state-estimation errors to zero and boundedness of all associated approximation errors, even in the presence of approximation error and disturbances. A Lyapunov-like analysis using Barbalat's Lemma, together with a dynamic single-hidden-layer neural network (SHLNN) model whose hidden nodes are randomly generated, is employed to establish the aforementioned properties. Hence, faster convergence and better computational efficiency than conventional SHLNNs are assured. Furthermore, with a few modifications regarding the selection of the activation function and the structure of the regressor vector, the proposed algorithm can be applied to any linearly parameterized neural network model. Next, as an extension of the proposed methodology, a nonlinearly parameterized single-hidden-layer neural network model is studied. The hidden and output weights are simultaneously adjusted by robust adaptive laws designed via Lyapunov stability theory. The second scheme also ensures convergence of the residual state-estimation errors to zero and boundedness of all associated approximation errors, even in the presence of approximation error and disturbances. Additionally, as in the first scheme, no previous knowledge about the ideal weights, approximation error or disturbances is necessary. Extensive simulations that validate the theoretical results and show the effectiveness of the two proposed methods are also provided.
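For readers unfamiliar with extreme learning machines, the following static toy shows the core mechanism the thesis builds on: random, untrained hidden weights with output weights solved in closed form. The dynamic identification scheme with Lyapunov-derived update laws is considerably more involved than this sketch.

    # Minimal ELM sketch: random hidden layer, least-squares output weights.
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, size=(400, 1))
    y = np.sin(3 * X[:, 0])                 # nonlinear target to identify

    n_hidden = 50
    W = rng.normal(size=(1, n_hidden))      # random hidden weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                  # hidden-layer activations
    beta = np.linalg.pinv(H) @ y            # closed-form output weights

    print("max error:", np.abs(H @ beta - y).max())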
APA, Harvard, Vancouver, ISO, and other styles
41

Barkrot, Felicia, and Mathias Berggren. "Using machine learning for control systems in transforming environments." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166984.

Full text
Abstract:
The development of computational power is constantly on the rise and opens up new possibilities in many areas. Two of the areas that have made great progress thanks to this development are control theory and artificial intelligence, whose most prominent subfield is machine learning. The difference between an environment controlled by classic control theory and an environment controlled by machine learning is that the machine learning model adapts in order to achieve a goal, while the classic model needs preset parameters. This supposedly makes the machine learning model better suited to an environment that changes over time. This theory is tested in this paper on a model of an inverted pendulum. Three different machine learning algorithms are compared to a classic model based on control theory. Changes are made to the model, and the adaptability of the machine learning algorithms is tested. As a result, one of the algorithms was able to mimic the classic model, although with different accuracy. When changes were made to the environments, the results showed that only one of the algorithms was able to adapt and achieve balance.
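As a hedged illustration of one way such a comparison can be set up, a learner can imitate a classic controller from its state/action traces; the PD gains and toy pendulum states below are invented, not the thesis's experimental setup:

    # Sketch: behavior cloning of a hand-written PD controller for balancing.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(5)
    states = rng.uniform(-1, 1, size=(2000, 2))             # [angle, angular velocity]
    actions = -(10.0 * states[:, 0] + 2.0 * states[:, 1])   # PD "teacher" controller

    model = DecisionTreeRegressor(max_depth=8).fit(states, actions)
    s = np.array([[0.1, -0.05]])
    print("teacher:", -(10.0 * 0.1 + 2.0 * -0.05), "learner:", model.predict(s)[0])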
APA, Harvard, Vancouver, ISO, and other styles
42

ANSARI, NAZLI. "MACHINE LEARNING METHODS TO IMPROVE NETWORK INTRUSION DETECTION SYSTEMS." OpenSIUC, 2019. https://opensiuc.lib.siu.edu/theses/2605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Adurti, Devi Abhiseshu, and Mohit Battu. "Optimization of Heterogeneous Parallel Computing Systems using Machine Learning." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21834.

Full text
Abstract:
Background: Heterogeneous parallel computing systems utilize a combination of different resources (CPUs and GPUs) to achieve high performance together with reduced latency and energy consumption. Programming applications that target various processing units requires employing different tools and programming models/languages. Furthermore, selecting the most optimal implementation, which may either target different processing units (i.e. CPU or GPU) or implement different algorithms, is not trivial for a given context. In this thesis, we investigate the use of machine learning to address the problem of selecting among implementation variants for an application running on a heterogeneous system. Objectives: This study focuses on providing an approach for the runtime optimization of heterogeneous parallel computing systems by building the most efficient machine learning model to predict the optimal implementation variant of an application. Methods: Six machine learning models, KNN, XGBoost, DTC, Random Forest Classifier, LightGBM and SVM, are trained and tested using stratified k-fold cross-validation on a dataset generated from a matrix multiplication application, with square matrix input dimensions ranging from 16x16 to 10992x10992. Results: The findings for each machine learning algorithm are presented through accuracy, the confusion matrix, a classification report of precision, recall and F1 score, and a comparison between the models in terms of accuracy, training run-time and prediction run-time to determine the best model. Conclusions: The XGBoost, DTC and SVM algorithms achieved 100% accuracy. In comparison to the other machine learning models, the DTC is found to be the most suitable due to the low time it requires for training and prediction when predicting the optimal implementation variant of an application on the heterogeneous system. Hence the DTC is the most suitable algorithm for the optimization of heterogeneous parallel computing.
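A toy version of the variant-selection task, with a synthetic rule standing in for the thesis's measured benchmark data, might look like the following sketch:

    # Sketch: predict the fastest implementation variant (CPU vs GPU) from the
    # matrix size, evaluated with stratified k-fold as in the thesis. The
    # "GPU wins above 512" rule is invented; real labels come from benchmarks.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    sizes = np.arange(16, 11000, 16).reshape(-1, 1)
    best = (sizes[:, 0] > 512).astype(int)       # 0 = CPU variant, 1 = GPU variant

    dtc = DecisionTreeClassifier()
    scores = cross_val_score(dtc, sizes, best, cv=StratifiedKFold(n_splits=5))
    print("mean accuracy:", scores.mean())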
APA, Harvard, Vancouver, ISO, and other styles
44

Beale, Dan. "Autonomous visual learning for robotic systems." Thesis, University of Bath, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558886.

Full text
Abstract:
This thesis investigates the problem of visual learning using a robotic platform. Given a set of objects, the robot's task is to autonomously manipulate, observe and learn. This allows the robot to recognise objects in a novel scene and pose, or to separate them into distinct visual categories. The main focus of the work is on autonomously acquiring object models using robotic manipulation. Autonomous learning is important for robotic systems. In the context of vision, it allows a robot to adapt to new and uncertain environments, updating its internal model of the world. It also reduces the amount of human supervision needed for building visual models. This leads to machines which can operate in environments with rich and complicated visual information, such as the home or industrial workspace, and also in environments which are potentially hazardous for humans. The hypothesis is that inducing robot motion on objects aids the learning process. It is shown that extra information from the robot's sensors provides enough information to localise an object and distinguish it from the background, and that decisive planning allows the object to be separated and observed from a variety of different poses, giving a good foundation for building a robust classification model. Contributions include a new segmentation algorithm, a new classification model for object learning, and a method for allowing a robot to supervise its own learning in cluttered and dynamic environments.
APA, Harvard, Vancouver, ISO, and other styles
45

Kostiadis, Kostas. "Learning to co-operate in multi-agent systems." Thesis, University of Essex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kanwar, John. "Smart cropping tools with help of machine learning." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74827.

Full text
Abstract:
Machine learning has been around for a long time, and its applications range over a large variety of subjects, everything from self-driving cars to data mining. When a person takes a picture with a mobile phone, it easily happens that the photo comes out a little crooked. People also take spontaneous photos with their phones, which can result in something irrelevant ending up in the corner of the image. This thesis combines machine learning with photo-editing tools. It explores how machine learning can be used to crop images automatically in an aesthetically pleasing way and how machine learning can be used to create a portrait-cropping tool. It also goes through how a straightening function can be implemented with the help of machine learning. Finally, it compares these tools with the automatic cropping tools of other software.
Machine learning has existed for a long time, and its applications span several different subjects, everything from self-driving cars to data mining. When a person takes a picture with a mobile phone, the photo easily ends up slightly crooked. People also take spontaneous photos with their phones, which can result in something appearing at the edge of the image that should not be there. This thesis project combines machine learning with photo-editing tools. It explores how machine learning can be used to crop images automatically in an aesthetically pleasing way, and how machine learning can be used to create a portrait-cropping tool. It also covers how a straightening function can be implemented with the help of machine learning. Finally, it compares these tools with the automatic cropping tools of other programs.
APA, Harvard, Vancouver, ISO, and other styles
47

Drugowitsch, Jan. "Learning classifier systems from first principles : a probabilistic reformulation of learning classifier systems from the perspective of machine learning." Thesis, University of Bath, 2007. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.500684.

Full text
Abstract:
Learning Classifier Systems (LCS) are a family of rule-based machine learning methods. They aim at the autonomous production of potentially human-readable results that form the most compact generalised representation while also maintaining high predictive accuracy, with a wide range of application areas such as autonomous robotics, economics and multi-agent systems. Their design is mainly approached heuristically and, even though their performance is competitive in regression and classification tasks, they do not meet their expected performance in sequential decision tasks, despite being initially designed for such tasks. It is our contention that improvement is hindered by a lack of theoretical understanding of their underlying mechanisms and dynamics.
APA, Harvard, Vancouver, ISO, and other styles
48

Panholzer, Georg. "Identifying Deviating Systems with Unsupervised Learning." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-1146.

Full text
Abstract:

We present a technique to identify deviating systems among a group of systems in a self-organized way. A compressed representation of each system is used to compute similarity measures, which are combined in an affinity matrix of all systems. Deviation detection and clustering is then used to identify deviating systems based on this affinity matrix. The compressed representation is computed with Principal Component Analysis and Kernel Principal Component Analysis. The similarity measure between two compressed representations is based on the angle between the spaces spanned by the principal components, but other methods of calculating a similarity measure are suggested as well. The subsequent deviation detection is carried out by computing the probability of each system to be observed given all the other systems. Clustering of the systems is done with hierarchical clustering and spectral clustering. The whole technique is demonstrated on four data sets of mechanical systems, two of a simulated cooling system and two of human gait. The results show its applicability on these mechanical systems.
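A condensed sketch of this pipeline on synthetic data, using scipy's subspace_angles for the angle-based similarity, could look as follows; whether the deviating system separates cleanly depends on the data:

    # Sketch: per-system PCA models, pairwise similarity from principal angles,
    # then spectral clustering on the affinity matrix. Data is synthetic.
    import numpy as np
    from scipy.linalg import subspace_angles
    from sklearn.decomposition import PCA
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(6)
    systems = [rng.normal(size=(100, 4)) for _ in range(5)]
    systems[4][:, 0] *= 10                    # one system made to deviate

    bases = [PCA(n_components=2).fit(s).components_.T for s in systems]
    n = len(bases)
    affinity = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            theta = subspace_angles(bases[i], bases[j]).max()
            affinity[i, j] = affinity[j, i] = np.cos(theta)

    labels = SpectralClustering(n_clusters=2,
                                affinity="precomputed").fit_predict(affinity)
    print(labels)   # the deviating system may fall into its own cluster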

APA, Harvard, Vancouver, ISO, and other styles
49

Hellborg, Per. "Optimering av datamängder med Machine learning : En studie om Machine learning och Internet of Things." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-13747.

Full text
Abstract:
This report is about how an Internet of Things (IoT) optimization can be done with machine learning (ML). The IoT devices in this report are sensors in containers that read how full the containers are. The report contains a case from Sogeti, where a client can use this optimization to get better routes for their garbage trucks: with this solution, the trucks will only go to full containers and can skip empty or nearly empty ones. This results in lower fuel costs and a better environment. The solution can be used in every industry that needs route optimization. To do this, there must first be an understanding of what IoT is and what is possible to do with it, and then an understanding of ML. The report covers these parts and describes how the method Design Science (DS) is used to produce the solution, along with some information about the method. The project also works agilely, with iterations during the implementation stage of DS. On the ML side, there is an argued comparison of which algorithm should be used. There are two candidates: hill climbing and K-means clustering. For this solution, K-means clustering is the one being used. K-means clustering is an unsupervised algorithm that does not need training data; it groups data points that are very similar and builds clusters. It does this with full containers, building clusters of those with similar coordinates so that the full containers in a cluster are close to each other. When this is done, the clusters are exported to a database, and a brief description is given of how it is possible to make a map that creates a route between the containers in a cluster.
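A minimal sketch of the clustering step described above, with invented coordinates and fill levels, could look like this:

    # Sketch: cluster only the containers reported full by their IoT sensors,
    # so each truck gets one geographically tight cluster. Data is invented.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    coords = rng.uniform(0, 10, size=(60, 2))   # container coordinates
    fill = rng.uniform(0, 1, size=60)           # sensor readings, 1.0 = full

    full = coords[fill > 0.8]                   # route only the full ones
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(full)
    for c in range(3):
        print("truck", c, "visits", (labels == c).sum(), "containers")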
APA, Harvard, Vancouver, ISO, and other styles
50

Tataru, Augustin. "Metrics for Evaluating Machine Learning Cloud Services." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-37882.

Full text
Abstract:
Machine Learning (ML) is nowadays offered as a service by several cloud providers. Consumers require metrics to be able to evaluate and compare multiple ML cloud services. There are not many established metrics that can be used specifically for these types of services. In this paper, the Goal-Question-Metric paradigm is used to define a set of metrics applicable to ML cloud services. The metrics are created based on goals expressed by professionals who use or are interested in using these services. At the end, a questionnaire is used to evaluate the metrics based on two criteria: relevance and ease of use.
APA, Harvard, Vancouver, ISO, and other styles
