
Dissertations / Theses on the topic 'Artificial intelligence and machine learning'


Consult the top 50 dissertations / theses for your research on the topic 'Artificial intelligence and machine learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

林謀楷 and Mau-kai Lam. "Inductive machine learning with bias." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31212426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Forsman, Robin, and Jimmy Jönsson. "Artificial intelligence and Machine learning : a diabetic readmission study." Thesis, Högskolan Kristianstad, Avdelningen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-19412.

Abstract:
The maturing of artificial intelligence provides great opportunities for healthcare, but also comes with new challenges. For artificial intelligence to be adequate, a comprehensive analysis of the data is necessary, along with testing the data against multiple algorithms to determine which algorithm is appropriate to use. In this study, a collection of data was gathered consisting of patients who either were or were not readmitted to hospital within 30 days of being admitted. The data was then analyzed and compared across different algorithms to determine the most appropriate algorithm to use.
3

Zhang, Sixiao. "Classifier Privacy in Machine Learning Markets." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586460332748024.

4

Lu, Yibiao. "Statistical methods with application to machine learning and artificial intelligence." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44730.

Abstract:
This thesis consists of four chapters. Chapter 1 focuses on theoretical results on high-order Laplacian-based regularization in function estimation. We study iterated Laplacian regularization in the context of supervised learning in order to achieve both nice theoretical properties (like thin-plate splines) and good performance over complex regions (like the soap film smoother). In Chapter 2, we propose an innovative static path-planning algorithm called m-A* for environments full of obstacles. Theoretically, we show that m-A* reduces the number of vertices. In the simulation study, our approach outperforms A* armed with the standard L1 heuristic and stronger ones such as True-Distance Heuristics (TDH), yielding faster query times, adequate memory usage and reasonable preprocessing time. Chapter 3 proposes the m-LPA* algorithm, which extends m-A* to dynamic path-planning and achieves better performance than the benchmark, Lifelong Planning A* (LPA*), in terms of robustness and worst-case computational complexity. Employing the same beamlet graphical structure as m-A*, m-LPA* encodes the information of the environment in a hierarchical, multiscale fashion, and therefore produces a more robust dynamic path-planning algorithm. Chapter 4 focuses on an approach for predicting spot electricity spikes via a combination of boosting and wavelet analysis. Extensive numerical experiments show that our approach improves prediction accuracy compared with support vector machines, thanks to the fact that the gradient boosting trees method inherits the good properties of decision trees, such as robustness to irrelevant covariates, fast computation and good interpretability.
5

Conway, Jennifer (Jennifer Elizabeth). "Artificial intelligence and machine learning : current applications in real estate." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120609.

Abstract:
Thesis: S.M. in Real Estate Development, Massachusetts Institute of Technology, Program in Real Estate Development in conjunction with the Center for Real Estate, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 113-117).
Real estate meets machine learning: real contribution or just hype? Creating and managing the built environment is a complicated task fraught with difficult decisions, challenging relationships, and a multitude of variables. Today's technology experts are building computers and software that can help resolve many of these challenges, some of them using what is broadly called artificial intelligence and machine learning. This thesis will define machine learning and artificial intelligence for the investor and real estate audience, examine the ways in which these new analytic, predictive, and automating technologies are being used in the real estate industry, and postulate potential future applications and associated challenges. Machine learning and artificial intelligence can and will be used to facilitate real estate investment in myriad ways, spanning all aspects of the real estate profession -- from property management, to investment decisions, to development processes -- transforming real estate into a more efficient and data-driven industry.
by Jennifer Conway.
S.M. in Real Estate Development
6

Carlucci, Lorenzo. "Some cognitively-motivated learning paradigms in Algorithmic Learning Theory." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 0.68 Mb., p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3220797.

7

Rose, Lydia M. "Modernizing Check Fraud Detection with Machine Learning." Thesis, Utica College, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=13421455.

Abstract:

Even as electronic payments and virtual currencies become more popular, checks are still a nearly ubiquitous form of payment for many situations in the United States, such as payroll, purchasing a vehicle, paying rent, and hiring a contractor. Fraud has always plagued this form of payment, and this research aimed to capture the scope of this 15th-century problem in the 21st century. Today, counterfeit checks originating from overseas are the scourge of online dating sites, classifieds forums, and mailboxes throughout the country. Additional frauds, including alteration, theft, and check kiting, also exploit checks. Check fraud is causing hundreds of millions in estimated losses to both financial institutions and consumers annually, and the problem is growing. Fraud investigators and financial institutions must be better educated and armed to successfully combat it. This research study collected information on the history of checks, forms of check fraud, victimization, and methods for check fraud prevention and detection. Check fraud is not only a financial issue, but also a social one. Uneducated and otherwise vulnerable consumers are particularly targeted by scammers exploiting this form of fraud. Racial minorities, the elderly, the mentally ill, and those living in poverty are disproportionately affected by fraud victimization. Financial institutions struggle to strike a balance between educating customers, complying with regulations, and tailoring alerts that are both valuable and fast. Applications of artificial intelligence, including machine learning and computer vision, have seen many recent advancements, but financial institution anti-fraud measures have not kept pace. This research concludes that the onus rests on financial institutions to take a modern approach to check fraud, incorporating machine learning into real-time reviews, to adequately protect victims.

8

Townsend, Larry. "Wireless Sensor Network Clustering with Machine Learning." Diss., NSUWorks, 2018. https://nsuworks.nova.edu/gscis_etd/1042.

Abstract:
Wireless sensor networks (WSNs) are useful in situations where a low-cost network needs to be set up quickly and no fixed network infrastructure exists. Typical applications are military exercises and emergency rescue operations. Due to the nature of a wireless network, there is no fixed routing or intrusion detection, and these tasks must be done by the individual network nodes. The nodes of a WSN are mobile devices and rely on battery power to function. Due to the limited power resources available to the devices and the tasks each node must perform, methods to decrease the overall power consumption of WSN nodes are an active research area. This research investigated using genetic algorithms and graph algorithms to determine a clustering arrangement of wireless nodes that would reduce WSN power consumption and thereby prolong the lifetime of the network. The WSN nodes were partitioned into clusters, and a node was elected from each cluster to act as a cluster head. The cluster head managed routing tasks for the cluster, thereby reducing the overall WSN power usage. The clustering configuration was determined via a genetic algorithm and graph algorithms. The fitness function for the genetic algorithm was based on the energy used by the nodes. It was found that the genetic algorithm was able to cluster the nodes in a near-optimal configuration for energy efficiency. Chromosome repair was also developed and implemented. Two different repair methods were found to be successful in producing near-optimal solutions and reducing the time to reach a solution versus a standard genetic algorithm. It was also found that the repair methods were able to incorporate gateway nodes and energy balancing to further reduce network energy consumption.
9

Abdul-hadi, Omar. "Machine Learning Applications to Robot Control." Thesis, University of California, Berkeley, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10817183.

Abstract:

Control of robot manipulators can be greatly improved with the use of velocity and torque feedforward control. However, the effectiveness of feedforward control relies greatly on the accuracy of the model. In this study, kinematics and dynamics analyses are performed on a six-axis arm, a Delta2 robot, and a Delta3 robot. Velocity feedforward is calculated in the traditional way, using the kinematics solution for velocity, while a neural network is used to model the torque feedforward equations. For each of these mechanisms, we first solve the forward and inverse kinematics transformations. We then derive a dynamic model. Unlike traditional methods of obtaining the parameters of the dynamics model, the model is instead used to infer dependencies between the input and output variables for neural network torque estimation. The neural network is trained with joint positions, velocities, and accelerations as inputs, and joint torques as outputs. After training is complete, the neural network is used to estimate the feedforward torque effort. Additionally, the use of neural networks for deriving the inverse kinematics solution of a six-axis arm is investigated. Although the neural network demonstrated an outstanding ability to model complex mathematical equations, the inverse kinematics solution was not accurate enough for practical use.

10

Cox, Michael Thomas. "Introspective multistrategy learning : constructing a learning strategy under reasoning failure." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/10074.

11

Lam, Mau-kai. "Inductive machine learning with bias /." Hong Kong : University of Hong Kong, 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13972558.

12

Goodman, Genghis. "A Machine Learning Approach to Artificial Floorplan Generation." UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/89.

Abstract:
The process of designing a floorplan is highly iterative and requires extensive human labor. Currently, a number of computer programs aid humans in floorplan design. These programs, however, are limited in that they cannot fully automate the creative process. Such automation would allow a professional to quickly generate many possible floorplan solutions, greatly expediting the process. However, automating this creative process is very difficult because of the many implicit and explicit rules a model must learn in order to create viable floorplans. In this paper, we propose a method of floorplan generation using two machine learning models: a sequential model that generates rooms within the floorplan, and a graph-based model that finds adjacencies between the generated rooms. Each of these models can be altered so that it is capable of producing a floorplan independently; however, we find that the combination of these models outperforms each of its pieces, as well as a statistics-based approach.
13

Kostias, Aristotelis, and Georgios Tagkoulis. "Development of an Artificial Intelligent Software Agent using Artificial Intelligence and Machine Learning Techniques to play Backgammon Variants." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-251923.

Abstract:
Artificial intelligence has seen enormous progress in many disciplines in recent years. In particular, digitalized versions of board games require artificial intelligence due to their complex decision-making environments. Game developers aim to create board game software agents that are intelligent, adaptive and responsive. However, the process of designing and developing such a software agent is far from straightforward due to the nature and diversity of each game. The thesis examines and presents a detailed procedure for constructing a software agent for backgammon variants, using temporal-difference learning, artificial neural networks and backpropagation. Different artificial intelligence and machine learning algorithms used in board games are reviewed and presented. Finally, the thesis describes the development and implementation of a software agent for the backgammon variant called Swedish Tables and evaluates its performance.
Artificial intelligence has seen enormous progress in many disciplines in recent years. Digitalized board games in particular require artificial intelligence, since their decision-making logic is very complex. Game developers' aim is to create software that is intelligent, adaptive and responsive. However, the design and development process for such software is far from settled, mostly because of the diverse nature of each game. This thesis investigates and proposes a detailed procedure for building a software agent for different kinds of backgammon, using AI neural networks and backpropagation methods. Various artificial intelligence and machine learning algorithms used in board games are investigated and presented. Finally, this thesis describes the implementation and development of a software agent for a backgammon variant, namely "Svenska Tabeller" (Swedish Tables), and evaluates its performance.
14

Xu, Huan. "Robust decision making and its applications in machine learning." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66905.

Abstract:
Decision making, formulated as finding a strategy that maximizes a utility function, depends critically on knowing the problem parameters precisely. The obtained strategy can be highly sub-optimal and/or infeasible when parameters are subject to uncertainty, a typical situation in practice. Robust optimization, and more generally robust decision making, addresses this issue by treating uncertain parameters as arbitrary elements of a pre-defined set and finding solutions based on a worst-case analysis. In this thesis we contribute to two closely related fields of robust decision making. First, we address two limitations of robust decision making: a lack of theoretical justification, and conservatism in sequential decision making. Specifically, we provide an axiomatic justification of robust optimization based on the MaxMin Expected Utility framework from decision theory. Furthermore, we propose three less conservative decision criteria for sequential decision-making tasks: (1) in uncertain Markov decision processes, we propose an alternative formulation of parameter uncertainty, the nested-set structured parameter uncertainty, and find the strategy that achieves maxmin expected utility, mitigating the conservatism of standard robust Markov decision processes; (2) we investigate uncertain Markov decision processes where each strategy is evaluated comparatively by its gap to the optimum value, propose two formulations, minimax regret and a mean-variance tradeoff of the regret, and study their computational cost; (3) we propose a novel Kalman filter design based on trading off likely performance against robustness under parameter uncertainty. Second, we apply robust decision making to machine learning both theoretically and algorithmically. Specifically, on the theoretical front, we show that the concept of robustness is essential to "successful" learning.
Decision making, formulated as finding a strategy that maximizes a utility function, depends critically on precise knowledge of the problem parameters. The strategy obtained can be highly sub-optimal and/or infeasible when the parameters are subject to uncertainty, a typical situation in practice. Robust optimization, and more generally robust decision making, addresses this question by treating the uncertain parameter as an arbitrary element of a predefined set and finding a solution through worst-case analysis. In this thesis, we contribute to two closely related fields of robust decision making. First, we consider two limitations of robust decision making: the lack of theoretical justification, and conservatism in sequential decision making. More specifically, we give an axiomatic justification of robust optimization based on the MaxMin expected utility framework from decision theory. In addition, we propose three less conservative criteria for sequential decision making, including: (1) in uncertain Markov decision processes, we propose an alternative model of parameter uncertainty, uncertainty structured as nested sets, and find a strategy that achieves maxmin expected utility to mitigate the conservatism of standard uncertain Markov decision processes; (2) we consider uncertain Markov decision processes where each strategy is evaluated by comparing its gap to the optimum; two models, minimax regret and the trade-off between the expectation and variance of the regret, are presented and their complexities studied; (3) we propose a new Kalman filter design.
15

Le, Truc Duc. "Machine Learning Methods for 3D Object Classification and Segmentation." Thesis, University of Missouri - Columbia, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13877153.

Abstract:

Object understanding is a fundamental problem in computer vision, and it has been extensively researched in recent years thanks to the availability of powerful GPUs and labelled data, especially in the context of images. However, 3D object understanding is still not on par with its 2D counterpart, and deep learning for 3D has not been fully explored yet. In this dissertation, I work on two approaches, both of which advance the state-of-the-art results in 3D classification and segmentation.

The first approach, called MVRNN, is based on the multi-view paradigm. In contrast to MVCNN, which does not generate consistent results across different views, our MVRNN treats the multi-view images as a temporal sequence, correlating the features and generating coherent segmentation across different views. MVRNN demonstrated state-of-the-art performance on the Princeton Segmentation Benchmark dataset.

The second approach, called PointGrid, is a hybrid method that combines points with a regular grid structure. 3D points retain fine details but are irregular, which is a challenge for deep learning methods. A volumetric grid is simple and has a regular structure, but does not scale well with data resolution. Our PointGrid, which is simple, allows the fine details to be consumed by normal convolutions under a coarser-resolution grid. PointGrid achieved state-of-the-art performance on the ModelNet40 and ShapeNet datasets in 3D classification and object part segmentation.

16

Chen, Hsinchun. "Machine Learning for Information Retrieval: Neural Networks, Symbolic Learning, and Genetic Algorithms." Wiley Periodicals, Inc, 1995. http://hdl.handle.net/10150/106427.

Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to “intelligent” information retrieval and indexing. More recently, information science researchers have turned to other newer artificial-intelligence-based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms. These newer techniques, which are grounded on diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information storage and retrieval systems. In this article, we first provide an overview of these newer techniques and their use in information science research. To familiarize readers with these techniques, we present three popular methods: the connectionist Hopfield network; the symbolic ID3/ID5R; and evolution-based genetic algorithms. We discuss their knowledge representations and algorithms in the context of information retrieval. Sample implementation and testing results from our own research are also provided for each technique. We believe these techniques are promising in their ability to analyze user queries, identify users’ information needs, and suggest alternatives for search. With proper user-system interactions, these methods can greatly complement the prevailing full-text, keyword-based, probabilistic, and knowledge-based techniques.
17

Machado, Beatriz. "Artificial intelligence to model bedrock depth uncertainty." Thesis, KTH, Jord- och bergmekanik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252317.

Abstract:
The estimation of bedrock level for soil and rock engineering is a challenge associated with many uncertainties. Nowadays, this estimation is performed through geotechnical or geophysical investigations. These are expensive techniques that are normally not used exhaustively because of limited budgets. Hence, the bedrock levels between investigations are roughly estimated and the uncertainty is almost unknown. Machine learning (ML) is an artificial intelligence technique that uses algorithms and statistical models to perform specific tasks. These mathematical models are built by dividing the data into training, testing and validation samples so that the algorithm improves automatically based on past experience. This thesis explores the possibility of applying ML to estimate bedrock levels and tries to find a suitable algorithm for the prediction and for estimating the uncertainties. Many different algorithms were tested during the process, and their accuracy was analysed by comparison with the input data and also with interpolation methods such as Kriging. The results show that the Kriging method is capable of predicting the bedrock surface with considerably good accuracy. However, when it is necessary to estimate the prediction interval (PI), Kriging presents a high standard deviation. The machine learning models produce a bedrock surface almost as smooth as Kriging, with better results for the PI. The bagging regressor with decision trees was the algorithm most capable of predicting an accurate bedrock surface and a narrow PI.
BIG and BeFo project "Rock and ground water including artificial intelligence".
18

Stroulia, Eleni. "Failure-driven learning as model-based self-redesign." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/8291.

19

Turner, Jonathan Milton. "Obstacle avoidance and path traversal using interactive machine learning /." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1905.pdf.

20

Abd, Gaus Yona Falinie. "Artificial intelligence system for continuous affect estimation from naturalistic human expressions." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16348.

Abstract:
The analysis and automatic estimation of affect from human expressions has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions under very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surroundings and illumination changes. In this thesis, an artificial intelligence system is proposed to continuously (represented along a continuum, e.g., from -1 to +1) estimate affect behaviour in terms of latent dimensions (e.g., arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signals and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. However, the extracted features may be missing information due to several factors, such as background noise or lighting conditions. The Haar wavelet transform is employed to determine whether a noise cancellation mechanism in feature space should be considered in the design of the affect estimation system. Beyond hand-crafted features, deep learning features are also analysed layer-wise, at the convolutional and fully connected layers. Convolutional neural networks such as AlexNet, VGGFace and ResNet were selected as deep learning architectures for feature extraction from facial expression images. A multimodal fusion scheme is then applied, fusing deep learning features and hand-crafted features together to improve performance. In the machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect dimension per time step.
Then, in the second stage, a subsequent model such as a Time Delay Neural Network, Long Short-Term Memory or Kalman filter is proposed to model the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by the subsequent model is not biased by the high variability present in consecutive frames, and at the same time it allows the network to exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in the wild is leveraged by investigating mathematical modelling for each emotion dimension. Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming are implemented to quantify the relationships between the modalities. In summary, the research work presented in this thesis reveals a fundamental approach to automatically and continuously estimating affect values from naturalistic human expressions. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework and fusion between modalities using mathematical equations, is demonstrated. It offers a strong basis for the development of artificial intelligence systems for continuous affect estimation, and more broadly for building real-time emotion recognition systems for human-computer interaction.
21

Michael, Christoph Cornelius. "General methods for analyzing machine learning sample complexity." W&M ScholarWorks, 1994. https://scholarworks.wm.edu/etd/1539623860.

Abstract:
During the past decade, there has been a resurgence of interest in applying mathematical methods to problems in artificial intelligence. Much work has been done in the field of machine learning, but it is not always clear how the results of this research should be applied to practical problems. Our aim is to help bridge the gap between theory and practice by addressing the question "if we are given a machine learning algorithm, how should we go about formally analyzing it?", as opposed to the usual question "how do we write a learning algorithm we can analyze?".
We will consider algorithms that accept randomly drawn training data as input and produce classification rules as their outputs. For the most part our analyses will be based on the syntactic structure of these classification rules; for example, if we know that the algorithm we want to analyze will only output logical expressions that are conjunctions of variables, we can use this fact to facilitate our analysis.
We use a probabilistic framework for machine learning, often called the pac model. In this framework, one asks whether or not a machine learning algorithm has a high probability of generating classification rules that "usually" make the right classification (pac means probably approximately correct). Research in the pac framework can be divided into two subfields. The first is concerned with the amount of training data needed for successful learning to take place (success being defined in terms of generalization ability); the second is concerned with the computational complexity of learning once the training data have been selected. Since most existing algorithms use heuristics to deal with the problem of complexity, we are primarily concerned with the amount of training data that algorithms require.
APA, Harvard, Vancouver, ISO, and other styles
22

Balch, Tucker. "Behavioral diversity in learning robot teams." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Lundin, Lowe. "Artificial Intelligence for Data Center Power Consumption Optimisation." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447627.

Full text
Abstract:
The aim of the project was to implement a machine learning model to optimise the power consumption of Ericsson’s Kista data center. The approach taken was to use a Reinforcement Learning agent trained in a simulation environment based on data specific to the data center. In this manner, the machine learning model could find interactions between parameters, both general and site-specific, in ways that a sophisticated algorithm designed by a human never could. In this work it was found that a neural network can effectively mimic a real data center and that the Reinforcement Learning policy "TD3" could, within the simulated environment, consistently and convincingly outperform the control policy currently in use at Ericsson’s Kista data center.
APA, Harvard, Vancouver, ISO, and other styles
24

Sephton, Nicholas. "Applying artificial intelligence and machine learning techniques to create varying play style in artificial game opponents." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/17331/.

Full text
Abstract:
Artificial Intelligence is quickly becoming an integral part of the modern world, employed in almost every modern industry we interact with. Whether it be self-driving cars, integration with our web clients, or the creation of actual intelligent companions such as Xiaoice, artificial intelligence is now an integrated and critical part of our daily existence. The application of artificial intelligence to games has been explored for several decades, with many agents now competing at a high level in strategic games which prove challenging for human players (e.g. Go and Chess). With artificial intelligence now able to produce strong opponents for many games, we are more concerned with the style of play of artificial agents, rather than simply their strength. Our work here focusses on the modification of artificial game opponents to create varied playstyle in complex games. We explore several techniques for modifying Monte Carlo Tree Search in an attempt to create different styles of play, thus changing the experience for human opponents playing against them. We also explore improving artificial agent strength, both by investigating parallelization of MCTS and by using Association Rule Mining to predict opponents’ choices, thus improving our ability to play well against them.
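One standard lever for altering MCTS behaviour, and hence the feel of an artificial opponent, is the exploration constant in the UCT selection rule; the snippet below is a generic illustration of that rule under the usual UCB1 formulation, not code taken from the thesis.

```python
import math

def uct_score(child_value_sum, child_visits, parent_visits, c=1.4):
    """UCB1/UCT selection score: mean value (exploitation) plus an
    exploration bonus that shrinks as the child is visited more often.
    Raising or lowering c shifts the search toward exploration or
    exploitation, one coarse way to vary an agent's behaviour."""
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    exploit = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore
```

During selection, MCTS descends the tree by repeatedly picking the child with the highest such score, so even this single constant measurably changes which lines of play the search favours.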
APA, Harvard, Vancouver, ISO, and other styles
25

Mollaret, Sébastien. "Artificial intelligence algorithms in quantitative finance." Thesis, Paris Est, 2021. http://www.theses.fr/2021PESC2002.

Full text
Abstract:
L'intelligence artificielle est devenue de plus en plus populaire en finance quantitative avec l'augmentation des capacités de calcul ainsi que de la complexité des modèles et a conduit à de nombreuses applications financières. Dans cette thèse, nous explorons trois applications différentes pour résoudre des défis concernant le domaine des dérivés financiers allant de la sélection de modèle, à la calibration de modèle ainsi que la valorisation des dérivés. Dans la Partie I, nous nous intéressons à un modèle avec changement de régime de volatilité afin de valoriser des dérivés sur actions. Les paramètres du modèle sont estimés à l'aide de l'algorithme d'Espérance-Maximisation (EM) et une composante de volatilité locale est ajoutée afin que le modèle soit calibré sur les prix d'options vanilles à l'aide de la méthode particulaire. Dans la Partie II, nous utilisons ensuite des réseaux de neurones profonds afin de calibrer un modèle à volatilité stochastique, dans lequel la volatilité est représentée par l'exponentielle d'un processus d'Ornstein-Uhlenbeck, afin d'approximer la fonction qui lie les paramètres du modèle aux volatilités implicites correspondantes hors ligne. Une fois l'approximation couteuse réalisée hors ligne, la calibration se réduit à un problème d'optimisation standard et rapide. Dans la Partie III, nous utilisons enfin des réseaux de neurones profonds afin de valorisation des options américaines sur de grands paniers d'actions pour surmonter la malédiction de la dimension. Différentes méthodes sont étudiées avec une approche de type Longstaff-Schwartz, où nous approximons les valeurs de continuation, et une approche de type contrôle stochastique, où nous résolvons l'équation différentielle partielle de valorisation en la reformulant en problème de contrôle stochastique à l'aide de la formule de Feynman-Kac non linéaire
Artificial intelligence has become more and more popular in quantitative finance with the increase in computing capacity as well as model complexity, and has led to many financial applications. In this thesis, we explore three different applications to solve challenges in the financial derivatives domain, from model selection to model calibration and derivatives pricing. In Part I, we focus on a regime-switching model to price equity derivatives. The model parameters are estimated using the Expectation-Maximization (EM) algorithm and a local volatility component is added to fit vanilla option prices using the particle method. In Part II, we then use deep neural networks to calibrate a stochastic volatility model, where the volatility is modelled as the exponential of an Ornstein-Uhlenbeck process, by approximating offline the mapping between model parameters and the corresponding implied volatilities. Once the expensive approximation has been performed offline, the calibration reduces to a standard, fast optimization problem. In Part III, we finally use deep neural networks to price American options on large baskets of stocks to overcome the curse of dimensionality. Different methods are studied: a Longstaff-Schwartz approach, where we approximate the continuation values, and a stochastic control approach, where we solve the pricing partial differential equation by reformulating the problem as a stochastic control problem using the non-linear Feynman-Kac formula.
APA, Harvard, Vancouver, ISO, and other styles
26

Knight, Trevor. "Analysis of trumpet tone quality using machine learning and audio feature selection." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110681.

Full text
Abstract:
This work examines which audio features, the components of recorded sound, are most relevant to trumpet tone quality by using classification and feature selection. A total of 10 trumpet players with a variety of experience levels were recorded playing the same notes under the same conditions. Twelve musical instrumentalists listened to the notes and provided subjective ratings of the tone quality on a seven-point Likert scale to provide training data for classification. The initial experiment verified that there is statistical agreement between human raters on tone quality and that it is possible to train a support vector machine (SVM) classifier to identify different levels of tone quality, achieving 72% classification accuracy when the notes were split into two classes and 46% when using seven classes. In the main experiment, different types of feature selection algorithms were applied to the 164 possible audio features to select high-performing subsets. The baseline set of all 164 audio features obtained a classification accuracy of 58.9% on seven classes under cross-validation. Ranking, sequential floating forward selection, and genetic search produced accuracies of 43.8%, 53.6%, and 59.6% with 20, 21, and 74 features, respectively. Future work in this field could focus on more nuanced interpretations of tone quality or on the applicability to other instruments.
Ce travail examine les caractéristique acoustique, c.-à-d. les composantes de l'enregistrement sonore, les plus pertinentes pour la qualité du timbre de trompette à l'aide de la classification automatique et de la sélection de caractéristiques. Un total de 10 joueurs de trompette de niveau varié, jouant les mêmes notes dans les mêmes conditions, a été enregistré. Douze instrumentistes de musique ont écouté les enregistrements et ont fourni des évaluations subjectives de la qualité du timbre sur une échelle de Likert à sept points afin de fournir des données d'entrainement du système de classification. La première expérience a vérifié qu'il existe une correlation statistique entre les évaluateurs humains sur la qualité du timbre et qu'il était possible de former un système de classification de type machine à vecteurs de support pour identifier les différents niveaux de qualité du timbre avec un succès de précision de la classification de 72% pour les notes quand divisées en deux classes et 46% lors de l'utilisation de sept classes. Dans l'expérience principale, différents types d'algorithmes de sélection de caractéristiques ont été appliqués aux 164 fonctions au- dio possibles pour sélectionner les sous-ensembles les plus performants. L'ensemble de toutes les 164 fonctions audio a obtenu une précision de classification de 58,9% avec sept classes testées par validation croisée. Les algorithmes de "ranking," "sequential floating forward selection," et génétiques produisent une précision respective de 43,8%, 53,6% et 59,6% avec 20, 21 et 74 caractéristiques. Les futurs travaux dans ce domaine pourraient se concentrer sur des interprétations plus nuancées de la qualité du timbre ou sur l'applicabilité à d'autres instruments.
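As a rough sketch of the select-then-classify workflow this abstract describes, the pipeline below ranks features with a univariate filter and cross-validates an SVM on the top subset; the synthetic data, the choice of k, and the filter are stand-in assumptions, not the thesis's actual 164 audio features or selection algorithms.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 164))       # 164 stand-in "audio features"
y = rng.integers(0, 2, size=120)      # two tone-quality classes (good / poor)
X[y == 1, :5] += 1.5                  # make a few features informative

# Scale, keep the 20 highest-scoring features, then classify with an SVM.
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=20), SVC())
scores = cross_val_score(clf, X, y, cv=5)
```

Placing the selector inside the pipeline matters: it is refit on each cross-validation fold, so the reported scores are not inflated by selecting features on the test data.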
APA, Harvard, Vancouver, ISO, and other styles
27

Watkins, Andrew B. "AIRS: a resource limited artificial immune classifier." Master's thesis, Mississippi State : Mississippi State University, 2001. http://library.msstate.edu/etd/show.asp?etd=etd-11052001-102048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

North, Charles. "The automatic detection and learning of affordances for locomotion." Thesis, University of Sussex, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Bloomingdale, Peter. "Machine Learning and Network-Based Systems Toxicology Modeling of Chemotherapy-Induced Peripheral Neuropathy." Thesis, State University of New York at Buffalo, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13427432.

Full text
Abstract:

The overarching goal of my thesis work was to utilize the combination of mathematical and experimental models in an effort to resolve chemotherapy-induced peripheral neuropathy (CIPN), one of the most common adverse effects of cancer chemotherapy. In chapter two, we developed quantitative structure-toxicity relationship (QSTR) models using machine learning algorithms that enable the prediction of peripheral neuropathy incidence solely from a chemical's molecular structure. The QSTR models enable the prediction of clinical neurotoxicity, which could be potentially useful in early drug discovery to screen out compounds that are highly neurotoxic and identify safer drug candidates to move forward into further development. The QSTR model was used to suggest modifications to the molecular structure of bortezomib that may reduce the number of patients who develop peripheral neuropathy from bortezomib therapy. In the third chapter, we conducted a network-based comparative systems pharmacology analysis of proteasome inhibitors. The concept behind this work was to use in silico pharmacological interaction networks to elucidate the neurotoxic differences between bortezomib and carfilzomib. Our theoretical results suggested the importance of the unfolded protein response in bortezomib neurotoxicity and that the mechanisms of neurotoxicity by proteasome inhibitors closely relate to the pathogenesis of Guillain-Barré syndrome caused by the Epstein-Barr virus. In chapter four we have written a review article to introduce the concept of Boolean network modeling in systems pharmacology. Due to the lack of knowledge about parameter values that govern the cellular dynamic processes involved in peripheral nerve damage, the development of a quantitative systems pharmacology model would not be feasible. Therefore, in chapter five, we developed a Boolean network-based systems pharmacology model of intracellular signaling and gene regulation in peripheral neurons.
The model was used to simulate the neurotoxic effects of bortezomib and to identify potential treatment strategies for proteasome-inhibitor-induced peripheral neuropathy. A novel combinatorial treatment strategy was identified that consists of a TNFα inhibitor, an NMDA receptor antagonist, and a reactive oxygen species inhibitor. Our subsequent goals were aimed at translating this finding with the endeavor of hopefully one day impacting human health. Initially we had proposed to use three separate agents for each of these targets; however, the clinical administration of three agents to prevent the neurotoxicity of one is likely unfeasible. We then came across a synthetic cannabinoid derivative, dexanabinol, that promiscuously inhibits all three of these targets and was previously developed for its intended use to treat traumatic brain injury. We believe that this drug candidate was worth investigating due to its overlapping pharmacological activity with the targets suggested by network analyses, its previously established favorable safety profile in humans, its notable in vitro/in vivo neuroprotective properties, and the rising popularity of the therapeutic potential of cannabinoids to treat CIPN. In chapter six we assessed the efficacy of dexanabinol for preventing the neurotoxic effects of bortezomib in various experimental models. Due to the limited translatability of 2D cell culture techniques, we investigated the pharmacodynamics of dexanabinol using a microphysiological model of the peripheral nerve. Bortezomib caused a reduction in electrophysiological endpoints, which were partially restored by dexanabinol. In chapter seven we evaluated the possible interaction of dexanabinol with the anti-cancer effects of bortezomib. We observed no significant differences in tumor volume between bortezomib alone and in combination with dexanabinol in a multiple myeloma mouse model.
Lastly, we are currently investigating the efficacy of dexanabinol in a well-established rat model of bortezomib-induced peripheral neuropathy. We believe that positive results would warrant a clinical trial. In conclusion, the statistical and mechanistic models of peripheral neuropathy that were developed could be used to reduce the overall burden of CIPN through the design of safer chemotherapeutics and the discovery of novel neuroprotective treatment strategies.

APA, Harvard, Vancouver, ISO, and other styles
30

Saulnier-Comte, Guillaume. "A machine learning toolbox for the development of personalized epileptic seizure detection algorithms." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119550.

Full text
Abstract:
Epilepsy is a chronic neurological disorder affecting around 50 million people worldwide. It is characterized by the occurrence of seizures: transient clinical events caused by synchronous and/or abnormal and excessive neuronal activity in the brain. This thesis presents a novel machine learning toolbox that generates personalized epileptic seizure detection algorithms exploiting the information contained in electroencephalographic recordings. A large variety of features designed by the seizure detection/prediction community are implemented. This broad set of features is tailored to specific patients through the use of automated feature selection techniques. Subsequently, the resulting information is exploited by a complex machine learning classifier that is able to detect seizures in real-time. The algorithm generation procedure uses a default set of parameters, requiring no prior knowledge of the patients' conditions. Moreover, the amount of data required to generate an algorithm is small. The performance of the toolbox is evaluated using cross-validation, a sound methodology, on subjects present in three different publicly available datasets. We report state-of-the-art results: detection rates ranging from 76% to 86% with median false positive rates under 2 per day. The toolbox, as well as a new dataset, are made publicly available in order to improve knowledge of the disorder and reduce the overhead of creating derived algorithms.
L'épilepsie est un trouble neurologique cérébral chronique qui touche environ 50 millions de personnes dans le monde. Cette maladie est caractérisée par la présence de crises d'épilepsie; un événement clinique transitoire causé par une activité cérébrale synchronisée et/ou anormale et excessive. Cette thèse présente un nouvel outil, utilisant des techniques d'apprentissage automatique, capable de générer des algorithmes personnalisés pour la détection de crises épileptiques qui exploitent l'information contenue dans les enregistrements électroencéphalographiques. Une grande variété de caractéristiques conçues pour la recherche en détection/prédiction de crises ont été implémentées. Ce large éventail d'information est adapté à chaque patient grâce à l'utilisation de techniques de sélection de caractéristiques automatisées. Par la suite, l'information découlant de cette procédure est utilisée par un modèle de décision complexe, qui peut détecter les crises en temps réel. La performance des algorithmes est évaluée en utilisant une validation croisée sur des sujets présents dans trois ensembles de données accessibles au public. Nous observons des résultats dignes de l'état de l'art: des taux de détections allant de 76% à 86% avec des taux de faux positifs médians en deçà de 2 par jour. L'outil ainsi qu'un nouvel ensemble de données sont rendus publics afin d'améliorer les connaissances sur la maladie et réduire la surcharge de travail causée par la création d'algorithmes dérivés.
APA, Harvard, Vancouver, ISO, and other styles
31

Mitchell, Matthew Winston 1968. "An architecture for situated learning agents." Monash University, School of Computer Science and Software Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/5553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Cui. "Image quality assessment using algorithmic and machine learning techniques." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources. Restricted: no access until June 2, 2014, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=26521.

Full text
Abstract:
Thesis (Ph.D.)--Aberdeen University, 2009.
With: An image quality metric based in corner, edge and symmetry maps / Li Cui, Alastair R. Allen. With: An image quality metric based on a colour appearance model / Li Cui and Alastair R. Allen. ACIVS / J. Blanc-Talon et al. eds. 2008 LNCS 5259, 696-707. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
33

Taylor, Farrell R. "Evaluation of Supervised Machine Learning for Classifying Video Traffic." NSUWorks, 2016. http://nsuworks.nova.edu/gscis_etd/972.

Full text
Abstract:
Operational deployment of machine learning based classifiers in real-world networks has become an important area of research to support automated real-time quality of service decisions by Internet service providers (ISPs) and more generally, network administrators. As the Internet has evolved, multimedia applications, such as voice over Internet protocol (VoIP), gaming, and video streaming, have become commonplace. These traffic types are sensitive to network perturbations, e.g. jitter and delay. Automated quality of service (QoS) capabilities offer a degree of relief by prioritizing network traffic without human intervention; however, they rely on the integration of real-time traffic classification to identify applications. Accordingly, researchers have begun to explore various techniques to incorporate into real-world networks. One method that shows promise is the use of machine learning techniques trained on sub-flows – a small number of consecutive packets selected from different phases of the full application flow. Generally, research on machine learning classifiers was based on statistics derived from full traffic flows, which can limit their effectiveness (recall and precision) if partial data captures are encountered by the classifier. In real-world networks, partial data captures can be caused by unscheduled restarts/reboots of the classifier or data capture capabilities, network interruptions, or application errors. Research on the use of machine learning algorithms trained on sub-flows to classify VoIP and gaming traffic has shown promise, even when partial data captures are encountered. This research extends that work by applying machine learning algorithms trained on multiple sub-flows to classification of video streaming traffic. Results from this research indicate that sub-flow classifiers have much higher and more consistent recall and precision than full flow classifiers when applied to video traffic. 
Moreover, the application of ensemble methods, specifically Bagging and adaptive boosting (AdaBoost) further improves recall and precision for sub-flow classifiers. Findings indicate sub-flow classifiers based on AdaBoost in combination with the C4.5 algorithm exhibited the best performance with the most consistent results for classification of video streaming traffic.
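The boosting setup reported in this abstract can be sketched generically in scikit-learn; note that scikit-learn ships CART-style trees rather than C4.5, so AdaBoost's default depth-1 tree stands in for the boosted weak learner, and the synthetic features below are stand-ins for per-sub-flow packet statistics, not the study's data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 12))                  # e.g. packet-size / inter-arrival stats
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy "video vs. non-video" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
# AdaBoost reweights training samples after each round so that later
# weak learners (depth-1 trees by default) focus on earlier mistakes.
clf = AdaBoostClassifier(n_estimators=50, random_state=1).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In a real traffic classifier, X would hold statistics computed over each sub-flow (a short window of consecutive packets), which is what lets the model cope with partial captures.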
APA, Harvard, Vancouver, ISO, and other styles
34

Uddin, Muhammad Fahim. "Enhanced Machine Learning Engine Engineering Using Innovative Blending, Tuning, and Feature Optimization." Thesis, University of Bridgeport, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13427950.

Full text
Abstract:

Motivated by an investigation into Ensemble Machine Learning (ML) techniques, this thesis contributes to addressing performance, consistency, and integrity issues such as overfitting, underfitting, predictive errors, the accuracy paradox, and poor generalization in ML models. Ensemble ML methods have shown promising outcomes when a single algorithm fails to approximate the true prediction function. Using meta-learning, a super learner is engineered by combining weak learners. Generally, several methods in Supervised Learning (SL) are evaluated to find the best fit to the underlying data and predictive analytics (i.e., the relevance of the "No Free Lunch" theorem). This thesis addresses three main challenges/problems: i) determining the optimum blend of algorithms/methods for enhanced SL ensemble models, ii) engineering the selection and grouping of features that aggregate to the highest possible predictive and non-redundant value in the training data set, and iii) addressing performance integrity issues such as the accuracy paradox. Therefore, an enhanced Machine Learning Engine Engineering (eMLEE) is inimitably constructed via built-in parallel processing and specially designed novel constructs for error and gain functions to optimally score the classifier elements for an improved training experience and validation procedures. eMLEE, as based on stochastic thinking, is built on: i) one centralized unit, the Logical Table unit (LT); ii) two explicit units, enhanced Algorithm Blend and Tuning (eABT) and enhanced Feature Engineering and Selection (eFES); and two implicit constructs, enhanced Weighted Performance Metric (eWPM) and enhanced Cross Validation and Split (eCVS). Hence, it proposes an enhancement to the internals of SL ensemble approaches.

Motivated by nature-inspired metaheuristic algorithms (such as GA, PSO, ACO, etc.), feedback mechanisms are improved by introducing a specialized function, Learning from the Mistakes (LFM), to mimic the human learning experience. LFM has shown significant improvement in refining predictive accuracy on the testing data by utilizing the computational processing of wrong predictions to increase the weighting scores of the weak classifiers and features. LFM further ensures the training layer experiences maximum mistakes (i.e., errors) for optimum tuning. With this designed into the engine, stochastic modeling/thinking is implicitly implemented.

Motivated by the OOP paradigm in high-level programming, eMLEE provides interface infrastructure using LT objects for the main units (i.e., Unit A and Unit B) to use the functions on demand during the classifier learning process. This approach also supports use of the eMLEE API by outer, real-world applications for predictive modeling, to further customize the classifier learning process and the trade-off among tuning elements, subject to the data type and the end model in goal.

Motivated by higher-dimensional processing and analysis (i.e., 3D) for improved analytics and learning mechanics, eMLEE incorporates 3D modeling of fitness metrics such as x for overfit, y for underfit, and z for optimum fit, and then creates logical cubes using LT handles to locate the optimum space during the ensemble process. This approach ensures fine tuning of the ensemble learning process with an improved accuracy metric.

To support the construction and implementation of the proposed scheme, mathematical models (i.e., definitions, lemmas, rules, and procedures) along with the governing algorithms' definitions (and pseudo-code), and the necessary illustrations (to assist in elaborating the concepts) are provided. Diverse sets of data are used to improve the generalization of the engine and tune the underlying constructs during the development-testing phases. To show the practicality and stability of the proposed scheme, several results are presented with a comprehensive analysis of the outcomes for the metrics (i.e., via integrity, corroboration, and quantification) of the engine. Two approaches are followed to corroborate the engine: i) testing the inner layers (i.e., internal constructs) of the engine (i.e., Unit-A, Unit-B, and C-Unit) to stabilize and test the fundamentals, and ii) testing the outer layer (i.e., the engine as a black box) against standard measurement metrics for real-world endorsement. Comparisons with various existing techniques in the state of the art are also reported. In conclusion of the extensive literature review, research undertaken, investigative approach, engine construction and tuning, validation approach, experimental study, and results visualization, eMLEE is found to outperform the existing techniques most of the time, in terms of classifier learning, generalization, metrics trade-off, optimum fitness, feature engineering, and validation.

APA, Harvard, Vancouver, ISO, and other styles
35

Mao, Yida 1972. "A metrics based detection of reusable object-oriented software components using machine learning algorithm /." Thesis, McGill University, 1999. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21601.

Full text
Abstract:
Since the emergence of object technology, organizations have accumulated a tremendous amount of object-oriented (OO) code. Instead of continuing to recreate components similar to existing artifacts, and considering the rising costs of development, many organizations would like to decrease software development costs and cycle time by reusing existing OO components. The difficulty of finding reusable components is that reuse is a complex and thus less quantifiable measure. In this research, we first proposed three reuse hypotheses about the impact of three internal characteristics (inheritance, coupling, and complexity) of OO software artifacts on reusability. Corresponding metrics suites were then selected and extracted. We used C4.5, a machine learning algorithm, to build predictive models from the learning data set that we obtained from a medium-sized software system developed in C++. Each predictive model was then verified according to its completeness, correctness, and global accuracy. The verification results confirmed the proposed hypotheses. The uniqueness of this research work is that we have combined the state of the art of three different subjects (reuse detection and prediction, OO metrics and their extraction, and applied machine learning algorithms) to form a process for finding interesting properties of OO software components that affect reusability.
APA, Harvard, Vancouver, ISO, and other styles
36

Letourneau, Sylvain. "Identification of attribute interactions and generation of globally relevant continuous features in machine learning." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/29029.

Full text
Abstract:
Datasets found in real world applications of machine learning are often characterized by low-level attributes with important interactions among them. Such interactions may increase the complexity of the learning task by limiting the usefulness of the attributes to dispersed regions of the representation space. In such cases, we say that the attributes are locally relevant. To obtain adequate performance with locally relevant attributes, the learning algorithm must be able to analyse the interacting attributes simultaneously and fit an appropriate model for the type of interactions observed. This is a complex task that surpasses the ability of most existing machine learning systems. This research proposes a solution to this problem by extending the initial representation with new globally relevant features. The new features make explicit the important information that was previously hidden by the initial interactions, thus reducing the complexity of the learning task. This dissertation first proposes an idealized study of the potential benefits of globally relevant features assuming perfect knowledge of the interactions among the initial attributes. This study involves synthetic data and a variety of machine learning systems. Recognizing that not all interactions produce a negative effect on performance, the dissertation introduces a novel technique named relevance graphs to identify the interactions that negatively affect the performance of existing learning systems. The tool of interactive relevance graphs addresses another important need by providing the user with an opportunity to participate in the construction of a new representation that cancels the effects of the negative attribute interactions. The dissertation extends the concept of relevance graphs by introducing a series of algorithms for the automatic discovery of appropriate transformations. We use the name GLOREF (GLObally RElevant Features) to designate the approach that integrates these algorithms.
The dissertation fully describes the GLOREF approach along with an extensive empirical evaluation with both synthetic and UCI datasets. This evaluation shows that the features produced by the GLOREF approach significantly improve the accuracy with both synthetic and real-world data.
APA, Harvard, Vancouver, ISO, and other styles
37

Choi, Chiyoung. "Predicting Customer Complaints in Mobile Telecom Industry Using Machine Learning Algorithms." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10791168.

Full text
Abstract:

Mobile telecom industry competition has been fierce for decades, increasing the importance of customer retention. Most mobile operators consider customer complaints a key factor in customer retention. We implement machine learning algorithms to predict the customer complaints of a Korean mobile telecom company. We used four machine learning algorithms: ANN (Artificial Neural Network), SVM (Support Vector Machine), KNN (K-Nearest Neighbors), and DT (Decision Tree). Our experiment utilized a database of 10,000 Korean mobile market subscribers and the variables of gender, age, device manufacturer, service quality, and complaint status. We found that ANN’s prediction performance outperformed the other algorithms. We also propose a segmented-prediction model for better accuracy and practical usage. Segments of the customer group are examined by gender, age, and device manufacturer. Prediction power is better for the female, older-customer, and non-iPhone groups than for the other segment groups. The highest accuracy is ANN’s 87.3% prediction for the 60s group.

APA, Harvard, Vancouver, ISO, and other styles
38

Vieira, Fábio Henrique Antunes [UNESP]. "Image processing through machine learning for wood quality classification." Universidade Estadual Paulista (UNESP), 2016. http://hdl.handle.net/11449/142813.

Full text
Abstract:
The quality classification of wood is prescribed throughout the wood chain industry, particularly in the processing and manufacturing fields. Those organizations have invested energy and time in adding value to the raw material, with the purpose of accomplishing better results, in line with the market. The objective of this work was to compare a Convolutional Neural Network, a deep learning method, for wood quality classification against other traditional Machine Learning techniques, namely Support Vector Machine (SVM), Decision Trees (DT), K-Nearest Neighbors (KNN), and Neural Networks (NN) associated with Texture Descriptors. This was done by assessing the predictive performance of experiments with the different techniques, Deep Learning and Texture Descriptors, for processing images of this material type. A camera was used to capture the 374 image samples adopted in the experiment, and their database is available for consultation. After acquisition, the images went through several processing stages: pre-processing, segmentation, feature analysis, and classification. Classification was performed through Deep Learning, more specifically Convolutional Neural Networks (CNN), and through Texture Descriptors combined with Support Vector Machine, Decision Trees, K-Nearest Neighbors and Neural Networks. Empirical results for the image dataset showed that the texture descriptor approach, regardless of the strategy employed, was very competitive when compared with the CNN in all performed experiments, and even surpassed it for this application.
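A hedged sketch of the texture-descriptor side of this comparison: it hand-rolls a tiny grey-level co-occurrence matrix (GLCM), extracts the classic contrast and homogeneity statistics, and feeds them to an SVM. The synthetic "wood" patches and all thresholds are invented; the thesis's 374-sample image database and its exact descriptors are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def glcm_features(img, levels=8):
    """Horizontal grey-level co-occurrence matrix -> (contrast, homogeneity)."""
    q = (img * (levels - 1)).astype(int)          # quantise to `levels` grey levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                            # count horizontally adjacent pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = float((glcm * (i - j) ** 2).sum())
    homogeneity = float((glcm / (1.0 + np.abs(i - j))).sum())
    return contrast, homogeneity

rng = np.random.default_rng(1)
def sample(defect):
    """Class 0: smooth patch; class 1: 'defect' with a high-contrast stripe pattern."""
    base = rng.uniform(0.4, 0.6, (16, 16))
    if defect:
        base[:, ::2] += 0.35                       # stripes raise horizontal contrast
    return np.clip(base + rng.normal(0, 0.02, base.shape), 0, 1)

X = np.array([glcm_features(sample(c)) for c in [0, 1] * 100])
y = np.array([0, 1] * 100)
clf = SVC().fit(X[:150], y[:150])
accuracy = clf.score(X[150:], y[150:])
```

The two-number feature vector is deliberately minimal; real texture-descriptor pipelines use several GLCM offsets and statistics.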
APA, Harvard, Vancouver, ISO, and other styles
39

Kuscu, Ibrahim. "Evolutionary generalisation and genetic programming." Thesis, University of Sussex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Shafin, Rubayet. "3D Massive MIMO and Artificial Intelligence for Next Generation Wireless Networks." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/97633.

Full text
Abstract:
3-dimensional (3D) massive multiple-input-multiple-output (MIMO)/full dimensional (FD) MIMO and application of artificial intelligence are two main driving forces for next generation wireless systems. This dissertation focuses on aspects of channel estimation and precoding for 3D massive MIMO systems and application of deep reinforcement learning (DRL) for MIMO broadcast beam synthesis. To be specific, downlink (DL) precoding and power allocation strategies are identified for a time-division-duplex (TDD) multi-cell multi-user massive FD-MIMO network. Utilizing channel reciprocity, DL channel state information (CSI) feedback is eliminated and the DL multi-user MIMO precoding is linked to the uplink (UL) direction of arrival (DoA) estimation through estimation of signal parameters via rotational invariance technique (ESPRIT). Assuming non-orthogonal/non-ideal spreading sequences of the UL pilots, the performance of the UL DoA estimation is analytically characterized and the characterized DoA estimation error is incorporated into the corresponding DL precoding and power allocation strategy. Simulation results verify the accuracy of our analytical characterization of the DoA estimation and demonstrate that the introduced multi-user MIMO precoding and power allocation strategy outperforms existing zero-forcing based massive MIMO strategies. In 3D massive MIMO systems, especially in TDD mode, a base station (BS) relies on the uplink sounding signals from mobile stations to obtain the spatial information for downlink MIMO processing. Accordingly, multi-dimensional parameter estimation of MIMO channel becomes crucial for such systems to realize the predicted capacity gains. In this work, we also study the joint estimation of elevation and azimuth angles as well as the delay parameters for 3D massive MIMO orthogonal frequency division multiplexing (OFDM) systems under a parametric channel modeling. 
We introduce a matrix-based joint parameter estimation method, and analytically characterize its performance for massive MIMO OFDM systems. Results show that the antenna array configuration at the BS plays a critical role in determining the underlying channel estimation performance, and the characterized MSEs match well with the simulated ones. Also, the joint parametric channel estimation outperforms the MMSE-based channel estimation in terms of the correlation between the estimated channel and the real channel. Beamforming in MIMO systems is one of the key technologies for modern wireless communication. Creating wide common beams is essential for enhancing the coverage of a cellular network and for improving the broadcast operation for control signals. However, in order to maximize the coverage, patterns for broadcast beams need to be adapted based on the users' movement over time. In this dissertation, we present a MIMO broadcast beam optimization framework using deep reinforcement learning. Our proposed solution can autonomously and dynamically adapt the MIMO broadcast beam parameters based on the users' distribution in the network. Extensive simulation results show that the introduced algorithm can achieve optimal coverage, and converge to the oracle solution for both single-cell and multiple-cell environments and for both periodic and Markov mobility patterns.
Doctor of Philosophy
Multiple-input-multiple-output (MIMO) is a technology where a transmitter with multiple antennas communicates with one or multiple receivers having multiple antennas. 3-dimensional (3D) massive MIMO is a recently developed technology where a base station (BS) or cell tower with a large number of antennas placed in a two-dimensional array communicates with hundreds of user terminals simultaneously.
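The broadcast-beam adaptation described in this abstract can be caricatured with a single-state tabular Q-learning loop: the agent picks one of a few candidate beam widths, and the reward is the fraction of (synthetic) users the beam covers. The user distribution, the width penalty, and the learning constants are all illustrative assumptions, far simpler than the dissertation's DRL framework.

```python
import random

random.seed(0)
beams = [10, 30, 60, 120]                           # candidate beam widths (degrees)
users = [random.gauss(0, 15) for _ in range(200)]   # user angles clustered near 0 degrees

def coverage(width):
    """Reward: fraction of users inside the beam, minus a width penalty
    (wider beams cost antenna gain, so the widest is not automatically best)."""
    hit = sum(1 for u in users if abs(u) <= width / 2) / len(users)
    return hit - 0.002 * width

Q = [0.0] * len(beams)
alpha, eps = 0.1, 0.2
for t in range(2000):
    # epsilon-greedy action selection over the candidate beams
    if random.random() < eps:
        a = random.randrange(len(beams))
    else:
        a = max(range(len(beams)), key=Q.__getitem__)
    Q[a] += alpha * (coverage(beams[a]) - Q[a])      # single-state Q-update
best_beam = beams[max(range(len(beams)), key=Q.__getitem__)]
```

With users drawn from a tight Gaussian, the mid-width beam wins: wide enough to cover most users, narrow enough to avoid the penalty.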
APA, Harvard, Vancouver, ISO, and other styles
41

Qi, Dehu. "Multi-agent systems : integrating reinforcement learning, bidding and genetic algorithms /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3060133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Prueller, Hans. "Distributed online machine learning for mobile care systems." Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10875.

Full text
Abstract:
Telecare and especially Mobile Care Systems are getting more and more popular. They have two major benefits: first, they drastically improve the living standards and even health outcomes for patients; in addition, they allow significant cost savings in adult care by reducing the need for medical staff. A common drawback of current Mobile Care Systems is that they are rather stationary: in most cases they are firmly installed in patients' houses or flats, which keeps patients very near to, or even inside, their homes. There is also an emerging second category of Mobile Care Systems which are portable without restricting the patients' moving space, but with the major drawback that they either have very limited computational abilities and rather low classification quality or, most frequently, only a very short runtime on battery, and therefore indirectly restrict the patients' freedom of movement once again. These drawbacks are inherently caused by the restricted computational resources and, mainly, the limitations of the battery-based power supply of mobile computer systems. This research investigates the application of novel Artificial Intelligence (AI) and Machine Learning (ML) techniques to improve the operation of Mobile Care Systems. As a result, based on the Evolving Connectionist Systems (ECoS) paradigm, an innovative approach for a highly efficient and self-optimising distributed online machine learning algorithm called MECoS - Moving ECoS - is presented. It balances the conflicting needs of providing a highly responsive, complex and distributed online learning classification algorithm while requiring only limited resources in the form of computational power and energy. This approach overcomes the drawbacks of current mobile systems and combines them with the advantages of powerful stationary approaches.
The research concludes that the practical application of the presented MECoS algorithm offers substantial improvements to the problems as highlighted within this thesis.
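The ECoS paradigm the thesis builds on can be sketched, very loosely, as an incremental prototype network that adds a node whenever no existing node is close enough and otherwise nudges the winning node toward the new example. The radius, learning rate, and toy "activity" labels below are invented; this is not the MECoS algorithm itself, only the evolving-connectionist flavour it starts from.

```python
import math

class TinyEcos:
    """Evolving nearest-prototype learner: bounded per-example work, grows nodes on demand."""
    def __init__(self, radius=1.5, lr=0.2):
        self.nodes = []                      # list of (prototype vector, label)
        self.radius, self.lr = radius, lr

    def learn_one(self, x, label):
        if self.nodes:
            i = min(range(len(self.nodes)), key=lambda k: math.dist(self.nodes[k][0], x))
            proto, plabel = self.nodes[i]
            if math.dist(proto, x) <= self.radius and plabel == label:
                # nudge the winning node toward the example (no instance storage)
                self.nodes[i] = ([p + self.lr * (xi - p) for p, xi in zip(proto, x)], label)
                return
        # no node close enough (or its label disagrees): evolve a new node
        self.nodes.append((list(x), label))

    def predict(self, x):
        i = min(range(len(self.nodes)), key=lambda k: math.dist(self.nodes[k][0], x))
        return self.nodes[i][1]

# toy "activity" stream with two clusters of sensor readings
model = TinyEcos(radius=1.5)
for x, y in [((0, 0), "rest"), ((0.5, 0.2), "rest"), ((5, 5), "fall"), ((5.3, 4.8), "fall")]:
    model.learn_one(x, y)
```

Because each example touches at most one node, both time and memory per update stay small and predictable, which is the property the thesis needs on battery-powered devices.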
APA, Harvard, Vancouver, ISO, and other styles
43

Harbert, Christopher W. Shang Yi. "An application of machine learning techniques to interactive, constraint-based search." Diss., Columbia, Mo. : University of Missouri-Columbia, 2005. http://hdl.handle.net/10355/4324.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2005.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on December 12, 2006. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
44

Foster, Kate Yvonne, and kate.foster@dsto.defence.gov.au. "An investigation of the use of past experience in single and multiple agent learning classifier systems." Swinburne University of Technology. Centre for Intelligent Systems & Complex Processes, 2005. http://adt.lib.swin.edu.au./public/adt-VSWT20051117.112922.

Full text
Abstract:
The field of agent control is concerned with the design and implementation of components that form an agent's control architecture. The interaction between these components determines how an agent's sensor data and internal state combine to direct how the agent will act. Rule-based systems couple sensing and action in the form of condition-action rules, and one class of such systems, learning classifier systems (LCSs), has been extensively used in the design of adaptive agents. An adaptive agent explores an often unknown environment and uses its experience in its environment with the aim of improving its performance over time. The data an adaptive agent receives regarding the current state of its environment might be limited and ambiguous. In learning classifier systems, three different approaches to the problem of limited and ambiguous data from the environment have been: (1) to enable the agent to learn from its past experience, (2) to develop sequences of rules (in which rules may be linked implicitly or explicitly) and (3) to use multiagent LCSs. This thesis investigates the use of an adaptive agent's past experience as a resource with which to perform a number of functions internal to the agent. These functions involve developing explicit sequences of rules, detecting and escaping from infinite loops, and firing and reinforcing rules. The first part of this thesis documents the design, implementation and evaluation of a control system that incorporates these functions. The control system is realised as a learning classifier system and is evaluated through experiments in a number of environments that provide limited and ambiguous stimuli. These experiments test the impact of explicit sequences of rules on the performance of a learning classifier system more thoroughly than previous research achieved.
The use of explicit sequences of rules results in mixed performance in these environments and it is shown that while the use of these sequences in simple environments enables the rule space to be more effectively explored, in complex environments the behaviours developed by these sequences result in the agent stagnating more often in corners of the environment. Rather than endowing the rule-base with more rules, as in previous research with explicit sequences of rules, it is proposed that multiple interacting agents may enhance the exploration of the rule space in more complex environments. This approach is taken in the second part of this thesis, where the control system is used with multiple agents that interact by sharing rules. The aim of this interaction is to enhance the rule discovery process through cooperation between agents and thus improve the performance of the agents in their respective environments. It is shown that the benefit obtained from rule sharing is dependent on the environment and the type and amount of rule sharing used and that rule sharing is generally more beneficial in complex environments compared to simple environments. The properties of the rule-bases developed in these environments are examined in order to account for these results and it is shown that the type and amount of rule sharing most useful in each environment are dependent on these properties.
APA, Harvard, Vancouver, ISO, and other styles
45

Tashman, Michael. "The Association Between Film Industry Success and Prior Career History: A Machine Learning Approach." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:24078355.

Full text
Abstract:
My thesis project is a means of understanding the conditions associated with success and failure in the American film industry. This is carried out by tracking the careers of several thousand actors and actresses, and the number of votes that their movies have received on IMDb. A fundamental characteristic of film career success is that of influence from prior success or failure: consider that an established “star” will almost certainly receive opportunities denied to an unknown actor, or that a successful actor with a string of poorly received films may stop receiving offers for desirable roles. The goal for this project is to develop an understanding of how these past events are linked with future success. The results of this project show a significant difference in career development between actors and actresses: actors' career trajectories are significantly influenced by a small number of “make or break” films, while actresses' careers are based on overall lifetime performance, particularly the ability to avoid poorly received films. Indeed, negatively received films are shown to have a distinctly greater influence on actresses' careers than those that were positively received. These results were obtained from a model using machine learning to find which movies from actors' and actresses' pasts tend to have the most predictive information. The parameters for which movies should be included in this set were optimized using a genetic learning algorithm, considering factors such as: film age, whether it was well received or poorly received, and if so, to what magnitude, and whether the film fits with the natural periodicity that many actors' and actresses' careers exhibit. Results were obtained following an extensive optimization, consisting of approximately 5000 evolutionary steps and 200,000 fitness evaluations, done over 125 hours.
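The genetic optimization described here can be miniaturized as follows: each individual encodes a look-back window and a flop weight, and fitness is accuracy at reproducing labels generated by a hidden rule on synthetic "careers". The encoding, the data, and the GA settings are all invented for illustration; only the evolve-select-mutate loop mirrors the abstract's approach.

```python
import random

random.seed(2)

def make_career():
    """Synthetic career: 10 film ratings; hidden rule = flop-doubled mean of last 3 films."""
    films = [random.uniform(0, 10) for _ in range(10)]
    score = sum(f if f >= 5 else 2 * (f - 5) + 5 for f in films[-3:]) / 3
    return films, int(score > 5)

careers = [make_career() for _ in range(300)]

def fitness(ind):
    """Accuracy of an individual (window, flop_weight) at reproducing the hidden labels."""
    window, flop_weight = ind
    correct = 0
    for films, label in careers:
        recent = films[-window:]
        score = sum(f if f >= 5 else flop_weight * (f - 5) + 5 for f in recent) / len(recent)
        correct += int((score > 5) == label)
    return correct / len(careers)

# evolve: elitist selection, crossover of the two genes, small mutations
pop = [(random.randint(1, 8), random.uniform(0.5, 3.0)) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(10):
        w = random.choice(parents)[0]            # crossover: genes from two parents
        fw = random.choice(parents)[1]
        if random.random() < 0.3:                # mutation
            w = max(1, min(8, w + random.choice([-1, 1])))
            fw = max(0.5, fw + random.gauss(0, 0.2))
        children.append((w, fw))
    pop = parents + children
best = max(pop, key=fitness)
```

The GA should recover a window near 3 and a flop weight near 2, the values baked into the hidden rule.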
APA, Harvard, Vancouver, ISO, and other styles
46

Ceylan, Hakan. "Using Reinforcement Learning in Partial Order Plan Space." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5232/.

Full text
Abstract:
Partial order planning is an important approach that solves planning problems without completely specifying the orderings between the actions in the plan. This property provides greater flexibility in executing plans, hence making partial order planners a preferred choice over other planning methodologies. However, in order to find partially ordered plans, partial order planners search in plan space rather than in the space of world states, and an uninformed search in plan space leads to poor efficiency. In this thesis, I discuss applying a reinforcement learning method, called the First-visit Monte Carlo method, to partial order planning in order to design agents which do not need any training data or heuristics but are still able to make informed decisions in plan space based on experience. Communicating effectively with the agent is crucial in reinforcement learning. I address how this task was accomplished in plan space and the results from an evaluation on a blocks world test bed.
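First-visit Monte Carlo, the method named above, is easy to show on a toy chain task instead of plan space: episodes are rolled out, discounted returns are computed backwards, and each state's value is the average of the returns from its first visits. The chain environment and all constants are illustrative stand-ins for the planner's refinement decisions.

```python
import random
from collections import defaultdict

random.seed(3)
N = 5                                   # states 0..4; stepping right from state 4 terminates

def episode():
    """Biased random walk toward the goal; returns a list of (state, reward) pairs."""
    s, traj = 0, []
    while True:
        move = 1 if random.random() > 0.25 else -1
        reward = 1.0 if s == N - 1 and move == 1 else 0.0
        traj.append((s, reward))
        if reward:                      # goal reached: episode ends
            return traj
        s = max(0, s + move)

V, returns = {}, defaultdict(list)
gamma = 0.9
for _ in range(500):
    traj = episode()
    # discounted return G_t computed backwards over the trajectory
    G, returns_at = 0.0, [0.0] * len(traj)
    for t in range(len(traj) - 1, -1, -1):
        G = traj[t][1] + gamma * G
        returns_at[t] = G
    # first-visit: record only the return from each state's FIRST occurrence
    firsts = {}
    for t, (s, _) in enumerate(traj):
        firsts.setdefault(s, t)
    for s, t in firsts.items():
        returns[s].append(returns_at[t])
for s in returns:
    V[s] = sum(returns[s]) / len(returns[s])
```

States nearer the goal accumulate larger average returns, which is exactly the value signal the thesis uses to rank choices in plan space.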
APA, Harvard, Vancouver, ISO, and other styles
47

Ikonomovski, Stefan V. "Detection of faulty components in Object-Oriented systems using design metrics and a machine learning algorithm." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0025/MQ50796.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

HALLGREN, ROSE. "Machine Dreaming." Thesis, KTH, Arkitektur, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298504.

Full text
Abstract:
Can I create my own design companion? My own design AI? How far do I go using the machine? What are the poetics of machine learning? This thesis is about exploring art and artificial intelligence, specifically machine learning, which is the study of computer algorithms that improve through experience. At its core, machine learning finds patterns in data and then uses those patterns to predict, in some way, the future.  I define a machine which works and generates images according to the given rules. The rules are set in time and in data. The decision, however, as in all creative processes, is up to the creator (in this case the architect), so it is as much a part of the creation as the setting up of the data. The method is a mix of my own personality and imagination and the impersonal machine (my computer).  During the process, I found inspiration in other creators working with machines in experimental ways that diverge from the original purpose of their machine or tool. The project is an investigation of contemporary technologies in which I try to understand my tool through a series of experiments.
APA, Harvard, Vancouver, ISO, and other styles
49

Swere, Erick A. R. "Machine learning in embedded systems." Thesis, Loughborough University, 2008. https://dspace.lboro.ac.uk/2134/4969.

Full text
Abstract:
This thesis describes novel machine learning techniques specifically designed for use in real-time embedded systems. The techniques directly address three major requirements of such learning systems. Firstly, learning must be capable of being achieved incrementally, since many applications do not have a representative training set available at the outset. Secondly, to guarantee real-time performance, the techniques must be able to operate within a deterministic and limited time bound. Thirdly, the memory requirement must be limited and known a priori to ensure the limited memory available to hold data in embedded systems will not be exceeded. The work described here has three principal contributions. The frequency table is a data structure specifically designed to reduce the memory requirements of incremental learning in embedded systems. The frequency table facilitates a compact representation of received data that is sufficient for decision tree generation. The frequency table decision tree (FTDT) learning method provides classification performance similar to existing decision tree approaches, but extends these to incremental learning while substantially reducing memory usage for practical problems. The incremental decision path (IDP) method is able to efficiently induce, from the frequency table of observations, the path through a decision tree that is necessary for the classification of a single instance. The classification performance of IDP is equivalent to that of existing decision tree algorithms, but since IDP allows the maximum number of partial decision tree nodes to be determined prior to the generation of the path, both the memory requirement and the execution time are deterministic. In this work, the viability of the techniques is demonstrated through application to real-time mobile robot navigation.
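The frequency-table idea can be sketched as follows: per-(attribute, value, class) counts replace instance storage, so memory is bounded by the attribute/value/class alphabet no matter how long the data stream runs. The toy "sensor" data and the count-product prediction rule below are invented for illustration; the thesis's FTDT instead induces a decision tree from such counts.

```python
from collections import defaultdict

class FrequencyTable:
    """Incremental per-(attribute, value, class) counts; no training instances are stored."""
    def __init__(self, n_attributes):
        self.counts = [defaultdict(lambda: defaultdict(int)) for _ in range(n_attributes)]
        self.class_counts = defaultdict(int)

    def learn_one(self, x, label):
        # O(n_attributes) time per example, memory bounded by the value alphabet
        self.class_counts[label] += 1
        for attr, value in enumerate(x):
            self.counts[attr][value][label] += 1

    def predict(self, x):
        # simple count-product vote with add-one (Laplace) smoothing
        best, best_score = None, -1.0
        for label, total in self.class_counts.items():
            score = float(total)
            for attr, value in enumerate(x):
                score *= (self.counts[attr][value][label] + 1) / (total + 2)
            if score > best_score:
                best, best_score = label, score
        return best

ft = FrequencyTable(2)
# toy sensor stream: (left_clear, right_clear) -> steering action
for x, y in [((1, 0), "left"), ((1, 0), "left"), ((0, 1), "right"), ((0, 1), "right"), ((1, 1), "left")]:
    ft.learn_one(x, y)
```

Because updates never grow with the number of examples seen, both the time and memory bounds the thesis requires hold by construction.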
APA, Harvard, Vancouver, ISO, and other styles
50

Stanescu, Ana. "Semi-supervised learning for biological sequence classification." Diss., Kansas State University, 2015. http://hdl.handle.net/2097/35810.

Full text
Abstract:
Doctor of Philosophy
Department of Computing and Information Sciences
Doina Caragea
Successful advances in biochemical technologies have led to inexpensive, time-efficient production of massive volumes of data, DNA and protein sequences. As a result, numerous computational methods for genome annotation have emerged, including machine learning and statistical analysis approaches that practically and efficiently analyze and interpret data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data in order to build quality classifiers. The process of labeling data can be expensive and time consuming, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on semi-supervised learning approaches for biological sequence classification. Although an attractive concept, semi-supervised learning does not invariably work as intended. Since the assumptions made by learning algorithms cannot be easily verified without considerable domain knowledge or data exploration, semi-supervised learning is not always "safe" to use. Advantageous utilization of the unlabeled data is problem dependent, and more research is needed to identify algorithms that can be used to increase the effectiveness of semi-supervised learning, in general, and for bioinformatics problems, in particular. At a high level, we aim to identify semi-supervised algorithms and data representations that can be used to learn effective classifiers for genome annotation tasks such as cassette exon identification, splice site identification, and protein localization. In addition, one specific challenge that we address is the "data imbalance" problem, which is prevalent in many domains, including bioinformatics. 
The data imbalance phenomenon arises when one of the classes to be predicted is underrepresented in the data because instances belonging to that class are rare (noteworthy cases) or difficult to obtain. Ironically, minority classes are typically the most important to learn, because they may be associated with special cases, as in the case of splice site prediction. We propose two main techniques to deal with the data imbalance problem, namely a technique based on "dynamic balancing" (augmenting the originally labeled data only with positive instances during the semi-supervised iterations of the algorithms) and another technique based on ensemble approaches. The results show that with limited amounts of labeled data, semi-supervised approaches can successfully leverage the unlabeled data, thereby surpassing their completely supervised counterparts. A type of semi-supervised learning, known as "transductive" learning, aims to classify the unlabeled data without generalizing to new, previously unencountered instances. Theoretically, this aspect makes transductive learning particularly suitable for the task of genome annotation, in which an entirely sequenced genome is typically available, sometimes accompanied by limited annotation. We study and evaluate various transductive approaches (such as transductive support vector machines and graph-based approaches) and sequence representations for the problem of cassette exon identification. The results obtained demonstrate the effectiveness of transductive algorithms in sequence annotation tasks.
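A hedged sketch of the "dynamic balancing" idea described above: a standard self-training loop, except that only confidently predicted positive (minority-class) instances migrate from the unlabeled pool into the labeled set on each iteration. The synthetic blobs, the 0.9 confidence cut-off, and the choice of logistic regression are assumptions for illustration; the thesis applies the idea to biological sequence classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def blob(center, n):
    """Gaussian cluster of 2-D points around `center`."""
    return rng.normal(center, 0.7, size=(n, 2))

# imbalanced labeled set: the positive class (1) is rare, as splice sites are
X_lab = np.vstack([blob((0, 0), 40), blob((3, 3), 5)])
y_lab = np.array([0] * 40 + [1] * 5)
X_unl = np.vstack([blob((0, 0), 200), blob((3, 3), 100)])   # unlabeled pool

for _ in range(5):                          # self-training iterations
    clf = LogisticRegression().fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    proba = clf.predict_proba(X_unl)[:, 1]
    confident_pos = proba > 0.9             # dynamic balancing: positives only
    if not confident_pos.any():
        break
    X_lab = np.vstack([X_lab, X_unl[confident_pos]])
    y_lab = np.concatenate([y_lab, np.ones(int(confident_pos.sum()), dtype=int)])
    X_unl = X_unl[~confident_pos]

final = LogisticRegression().fit(X_lab, y_lab)
pos_rate = float((y_lab == 1).mean())
```

Admitting only positives counteracts the tendency of plain self-training to flood the labeled set with easy majority-class examples, which is the failure mode the abstract targets.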
APA, Harvard, Vancouver, ISO, and other styles
