Dissertations / Theses on the topic 'Operator Learning'

Consult the top 50 dissertations / theses for your research on the topic 'Operator Learning.'


1

Tummaluri, Raghuram R. "Operator Assignment in Labor Intensive Cells Considering Operation Time Based Skill Levels, Learning and Forgetting." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1126900571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kienzle, Wolf. "Learning an interest operator from human eye movements." Berlin Logos-Verl, 2008. http://d-nb.info/990541908/04.

Full text
3

Schrödl, Stefan J. "Operator valued reproducing kernels and their application in approximation and statistical learning." Aachen Shaker, 2009. http://d-nb.info/99654559X/04.

Full text
4

Huusari, Riikka. "Kernel learning for structured data: a study on learning operator- and scalar-valued kernels for multi-view and multi-task learning problems." Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0312.

Full text
Abstract:
Nowadays, datasets with non-standard structures are more and more common. Examples include the already well-known multi-task framework, where each data sample is associated with multiple output labels, as well as the multi-view learning paradigm, in which each data sample can be seen to contain numerous descriptions. To obtain good performance in tasks like these, it is important to model well the interactions present in the views or output variables. Kernel methods offer a justified and elegant way to solve many machine learning problems. Operator-valued kernels, which generalize the well-known scalar-valued kernels, have recently gained attention as a way to learn vector-valued functions. The choice of a good kernel function plays a crucial role in the success of the learning task. This thesis offers kernel learning as a solution for various machine learning problems. Chapters two and three investigate learning the data interactions in multi-view data. In the first of these, the focus is on supervised inductive learning and the interactions are modeled with operator-valued kernels. Chapter three tackles multi-view data and kernel learning in an unsupervised context and proposes a scalar-valued kernel learning method for completing missing data in the kernel matrices of a multi-view problem. In the last chapter we turn from multi-view to multi-output learning, and return to the supervised inductive learning paradigm. We propose a method for learning inseparable operator-valued kernels that model interactions between the inputs and multiple output variables.
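For readers unfamiliar with operator-valued kernels, the separable special case that the inseparable kernels above generalize can be sketched in a few lines. This is an illustrative example, not code from the thesis; the function name and the Gaussian choice of scalar kernel are ours.

```python
import numpy as np

def separable_ovk_gram(X, A, gamma=1.0):
    """Block Gram matrix of the separable operator-valued kernel
    K(x, x') = k(x, x') * A, where k is a Gaussian scalar kernel and
    A is a p x p PSD matrix encoding interactions between the p outputs."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    k = np.exp(-gamma * sq_dists)   # n x n scalar Gram matrix
    return np.kron(k, A)            # (n*p) x (n*p) operator-valued Gram

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
A = np.array([[2.0, 1.0], [1.0, 2.0]])  # couples the two outputs
G = separable_ovk_gram(X, A)
print(G.shape)  # (6, 6)
```

An inseparable kernel, by contrast, does not factor into a scalar kernel times one fixed output matrix, which is what lets it model input-dependent interactions between inputs and outputs.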
5

Montagner, Igor dos Santos. "W-operator learning using linear models for both gray-level and binary inputs." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-21082017-111455/.

Full text
Abstract:
Image Processing techniques can be used to solve a broad range of problems, such as medical imaging, document processing and object segmentation. Image operators are usually built by combining basic image operators and tuning their parameters. This requires both experience in Image Processing and trial and error to find the best combination of parameters. An alternative approach to designing image operators is to estimate them from pairs of training images containing examples of the expected input and their processed versions. By restricting the learned operators to those that are translation invariant and locally defined ($W$-operators), we can apply Machine Learning techniques to estimate image transformations. The shape that defines which neighbors are used is called a window. $W$-operators trained with large windows usually overfit due to the lack of sufficient training data. This issue is even more pronounced when training operators with gray-level inputs. Although approaches such as the two-level design, which combines multiple operators trained on smaller windows, partly mitigate these problems, they also require more complicated parameter determination to achieve good results. In this work we present techniques that increase the window sizes we can use and decrease the number of manually defined parameters in $W$-operator learning. The first one, KA, is based on Support Vector Machines and employs kernel approximations to estimate image transformations. We also present kernels adequate for processing binary and gray-level images. The second technique, NILC, automatically finds small subsets of operators that can be successfully combined using the two-level approach. Both methods achieve results competitive with methods from the literature in two different application domains. The first is a binary document processing problem common in Optical Music Recognition, while the second is a segmentation problem in gray-level images. The same techniques were applied without modification in both domains.
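As a concrete reference point, the classical table-based $W$-operator design that this thesis scales up can be sketched as follows (an illustrative sketch with hypothetical names; the thesis replaces the raw lookup table with linear and kernel models). With a k x k binary window there are 2^(k*k) possible patterns, which is exactly why large windows demand far more training data than is usually available.

```python
import numpy as np
from collections import Counter, defaultdict

def learn_w_operator(pairs, win=3):
    """Classical table-based W-operator design: for every win x win input
    pattern observed in the training images, record the most frequent
    value of the corresponding centre pixel in the ideal output images."""
    votes = defaultdict(Counter)
    for x, y in pairs:
        h, w = x.shape
        for i in range(h - win + 1):
            for j in range(w - win + 1):
                patt = x[i:i + win, j:j + win].tobytes()
                votes[patt][int(y[i + win // 2, j + win // 2])] += 1
    return {p: c.most_common(1)[0][0] for p, c in votes.items()}

def apply_w_operator(op, x, win=3, default=0):
    """Slide the window over x; unseen patterns fall back to `default`."""
    out = np.full(x.shape, default, dtype=x.dtype)
    h, w = x.shape
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            patt = x[i:i + win, j:j + win].tobytes()
            out[i + win // 2, j + win // 2] = op.get(patt, default)
    return out

# Sanity check: training on (x, x) pairs learns the identity on the
# interior, since each pattern's centre value is part of the pattern.
rng = np.random.default_rng(0)
x = (rng.random((6, 6)) > 0.5).astype(np.uint8)
op = learn_w_operator([(x, x)])
ok = np.array_equal(apply_w_operator(op, x)[1:-1, 1:-1], x[1:-1, 1:-1])
print(ok)  # True
```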
6

Alhawari, Omar I. "Operator Assignment Decisions in a Highly Dynamic Cellular Environment." Ohio University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1221596218.

Full text
7

Schrödl, Stefan J. [Verfasser]. "Operator-valued Reproducing Kernels and Their Application in Approximation and Statistical Learning / Stefan J Schrödl." Aachen : Shaker, 2009. http://d-nb.info/1159835454/34.

Full text
8

Wörmann, Julian [Verfasser], Martin [Akademischer Betreuer] Kleinsteuber, Martin [Gutachter] Kleinsteuber, and Walter [Gutachter] Stechele. "Structured Co-sparse Analysis Operator Learning for Inverse Problems in Imaging / Julian Wörmann ; Gutachter: Martin Kleinsteuber, Walter Stechele ; Betreuer: Martin Kleinsteuber." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1205069437/34.

Full text
9

Tamascelli, Nicola. "A Machine Learning Approach to Predict Chattering Alarms." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
The alarm system plays a vital role in ensuring safety and reliability in the process industry. Ideally, an alarm should inform the operator about critical conditions only; during alarm floods, the operator may be overwhelmed by several alarms in a short time span, and crucial alarms are more likely to be missed. Poor alarm management is one of the main causes of unintended plant shutdowns, incidents and near misses in the chemical industry. Most of the alarms triggered during a flood episode are nuisance alarms, i.e., alarms that do not communicate new information to the operator or that do not require an operator action. Chattering alarms (alarms that repeat three or more times in a minute) and redundant alarms (duplicated alarms) are common forms of nuisance. Identifying nuisance alarms is a key step in improving the performance of the alarm system. Advanced techniques for alarm rationalization have been developed, proposing methods to quantify chattering, redundancy and correlation between alarms. Although very effective, these techniques produce static results. Machine Learning appears to be an interesting opportunity to retrieve further knowledge and to support these techniques. This knowledge can be used to produce more flexible and dynamic models, as well as to predict alarm behaviour during floods. The aim of this study is to develop a machine-learning-based algorithm for real-time alarm classification and rationalization, whose results can be used to support the operator's decision making. Specifically, efforts have been directed towards chattering prediction during alarm floods. Advanced techniques for chattering, redundancy and correlation assessment have been applied to a real industrial alarm database. A modified approach has been developed to assess chattering dynamically, and the results have been used to train three different machine learning models, whose performance has been evaluated and discussed.
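The chattering definition used above (an alarm that repeats three or more times within one minute) translates directly into a sliding-window check over an alarm tag's activation timestamps. A minimal sketch, with function name and parameters chosen by us rather than taken from the thesis:

```python
from bisect import bisect_right

def is_chattering(timestamps, window=60.0, threshold=3):
    """Return True if any `window`-second interval contains at least
    `threshold` activations of the same alarm tag (times in seconds)."""
    ts = sorted(timestamps)
    for i, t in enumerate(ts):
        # index just past the last activation within [t, t + window)
        j = bisect_right(ts, t + window - 1e-9)
        if j - i >= threshold:
            return True
    return False

print(is_chattering([0, 10, 50]))    # True: three activations in 60 s
print(is_chattering([0, 120, 300]))  # False: activations are spread out
```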
10

Lee, Ji Hyun. "Development of a Tool to Assist the Nuclear Power Plant Operator in Declaring a State of Emergency Based on the Use of Dynamic Event Trees and Deep Learning Tools." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1543069550674204.

Full text
11

Syben, Christopher [Verfasser], Andreas [Akademischer Betreuer] Maier, Andreas [Gutachter] Maier, and Adam [Gutachter] Wang. "Known Operator Learning for a Hybrid Magnetic Resonance/X-ray Imaging Acquisition Scheme / Christopher Syben ; Gutachter: Andreas Maier, Adam Wang ; Betreuer: Andreas Maier." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2021. http://d-nb.info/1237499151/34.

Full text
12

Sanguanpuak, T. (Tachporn). "Radio resource sharing with edge caching for multi-operator in large cellular networks." Doctoral thesis, Oulun yliopisto, 2019. http://urn.fi/urn:isbn:9789526221564.

Full text
Abstract:
The aim of this thesis is to devise new paradigms for radio resource sharing, including cache-enabled virtualized large cellular networks for mobile network operators (MNOs). Self-organizing resource allocation for small cell networks is also considered. In such networks, the MNOs rent radio resources from the infrastructure provider (InP) to support their subscribers. Reducing operational costs while significantly increasing the usage of existing network resources leads to a paradigm in which the MNOs share their infrastructure, i.e., base stations (BSs), antennas, spectrum and edge caches, among themselves. In this regard, we integrate the theoretical insights provided by stochastic-geometry approaches to model spectrum and infrastructure sharing in large cellular networks. In the first part of the thesis, we study the non-orthogonal multi-MNO spectrum allocation problem for small cell networks with the goal of maximizing the overall network throughput, defined as the expected weighted sum rate of the MNOs. Each MNO is assumed to serve multiple small cell BSs (SBSs). We adopt the many-to-one stable matching game framework to tackle this problem. We also investigate the role of power allocation schemes for the SBSs using Q-learning. In the second part, we model and analyze an infrastructure sharing system with a single buyer MNO and multiple seller MNOs. The MNOs are assumed to operate over their own licensed spectrum bands while sharing BSs. We assume that multiple seller MNOs compete with each other to sell their infrastructure to a potential buyer MNO. The optimal strategy for the seller MNOs, in terms of the fraction of infrastructure to be shared and its price, is obtained by computing the equilibrium of a Cournot-Nash oligopoly game.
Finally, we develop a game-theoretic framework to model and analyze cache-enabled virtualized cellular networks in which the network infrastructure, e.g., BSs and cache storage, owned by an InP, is rented and shared among multiple MNOs. We formulate a Stackelberg game with the InP as the leader and the MNOs as the followers. The InP tries to maximize its profit by optimizing its infrastructure rental fee. Each MNO aims to minimize its infrastructure cost by minimizing the cache intensity under a probabilistic delay constraint for the user equipment (UE). Since the MNOs share the rented infrastructure, we apply a cooperative game concept, namely the Shapley value, to divide the cost among them.
13

Laforgue, Pierre. "Deep kernel representation learning for complex data and reliability issues." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT006.

Full text
Abstract:
The first part of this thesis explores deep kernel architectures for complex data. One of the known keys to the success of deep learning algorithms is the ability of neural networks to extract meaningful internal representations. However, the theoretical understanding of why these compositional architectures are so successful remains limited, and deep approaches are almost exclusively restricted to vectorial data. On the other hand, kernel methods provide functional spaces whose geometry is well studied and understood. Their complexity can be easily controlled through the choice of kernel or penalization. In addition, vector-valued kernel methods can be used to predict kernelized data, which allows making predictions in complex structured spaces as soon as a kernel can be defined on them. The deep kernel architecture we propose consists in replacing the basic neural mappings by functions from vector-valued Reproducing Kernel Hilbert Spaces (vv-RKHSs). Although very different at first glance, the two functional spaces are actually very similar, and differ only in the order in which linear and nonlinear functions are applied. Apart from gaining understanding and theoretical control over the layers, considering kernel mappings allows dealing with structured data, both in input and output, broadening the applicability of networks. We finally expose works that ensure a finite-dimensional parametrization of the model, opening the door to efficient optimization procedures for a wide range of losses.
The second part of this thesis investigates alternatives to the sample mean as a substitute for the expectation in the Empirical Risk Minimization (ERM) paradigm. Indeed, ERM implicitly assumes that the empirical mean is a good estimate of the expectation. However, in many practical use cases (e.g. heavy-tailed distributions, presence of outliers, biased training data), this is not the case. The Median-of-Means (MoM) is a robust mean estimator constructed as follows: the original dataset is split into disjoint blocks, the empirical mean of each block is computed, and the median of these means is returned. We propose two extensions of MoM, to randomized blocks and/or U-statistics, with provable guarantees. By construction, MoM-like estimators exhibit interesting robustness properties, which we further exploit in the design of robust learning strategies. The (randomized) MoM minimizers are shown to be robust to outliers, while MoM tournament procedures are extended to the pairwise setting. We close this thesis by proposing an ERM procedure tailored to the sample bias issue. If the training data come from several biased samples, blindly computing the empirical mean yields a biased estimate of the risk. Alternatively, given knowledge of the biasing functions, it is possible to reweight the observations so as to build an unbiased estimate of the risk under the test distribution. We derive non-asymptotic guarantees for the minimizers of the debiased risk estimate thus created, and empirically endorse the soundness of the approach.
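The Median-of-Means construction just described fits in a few lines. This sketch uses a single shuffled partition into equal blocks; the randomized-block and U-statistic extensions studied in the thesis go beyond it.

```python
import random
import statistics

def median_of_means(sample, n_blocks, seed=0):
    """Median-of-Means estimator: split the sample into disjoint blocks,
    average each block, and return the median of the block means."""
    rng = random.Random(seed)
    data = list(sample)
    rng.shuffle(data)  # random partition into blocks
    block_size = len(data) // n_blocks
    means = [
        statistics.fmean(data[i * block_size:(i + 1) * block_size])
        for i in range(n_blocks)
    ]
    return statistics.median(means)

# With one large outlier, MoM stays close to the bulk of the data,
# while the plain mean is dragged away.
data = [1.0] * 99 + [1000.0]
print(median_of_means(data, n_blocks=10))  # 1.0
print(statistics.fmean(data))              # 10.99
```

The outlier corrupts at most one of the ten blocks, so at least nine block means equal 1.0 and the median ignores the corrupted block entirely.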
14

Arthur, Richard B. "Vision-Based Human Directed Robot Guidance." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.

Full text
15

Grant, Timothy John. "Inductive learning of knowledge-based planning operators." [Maastricht : Maastricht : Rijksuniversiteit Limburg] ; University Library, Maastricht University [Host], 1996. http://arno.unimaas.nl/show.cgi?fid=6686.

Full text
16

Giulini, Ilaria. "Generalization bounds for random samples in Hilbert spaces." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0026/document.

Full text
Abstract:
This thesis focuses on obtaining generalization bounds for random samples in reproducing kernel Hilbert spaces. The approach consists in first obtaining non-asymptotic dimension-free bounds in finite-dimensional spaces, using PAC-Bayesian inequalities related to Gaussian perturbations of the parameter, and then generalizing the results to a separable Hilbert space. We first investigate the question of estimating the Gram operator by a robust estimator from an i.i.d. sample, and we present uniform bounds that hold under weak moment assumptions. These results allow us to characterize principal component analysis independently of the dimension of the ambient space and to propose stable versions of it. In the last part of the thesis we present a new algorithm for spectral clustering. It consists in replacing the projection on the eigenvectors associated with the largest eigenvalues of the Laplacian matrix by a power of the normalized Laplacian. This iteration, justified by the analysis of clustering in terms of Markov chains, performs a smooth truncation and yields an algorithm in which the number of clusters is determined automatically. We prove non-asymptotic bounds for the convergence of our spectral clustering algorithm applied to an i.i.d. sample of points in a Hilbert space, deduced from the bounds for the Gram operator in a Hilbert space. Experiments are carried out in the context of image analysis.
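The iterated-Laplacian idea above can be illustrated on a small similarity matrix. This is our toy sketch, not the thesis' algorithm, which in particular works in a Hilbert space and determines the number of clusters automatically.

```python
import numpy as np

def iterated_laplacian_embedding(K, t=8):
    """Row-normalise a similarity matrix into a Markov transition matrix
    P = D^{-1} K and return its t-th power. Large t damps the small
    eigenvalues of P, so P^t acts as a smooth, regularised substitute
    for a hard projection onto the leading eigenvectors."""
    P = K / K.sum(axis=1, keepdims=True)
    return np.linalg.matrix_power(P, t)

# Two well-separated groups: after iteration, rows of the embedding
# coincide within each group, so any simple clustering recovers them.
K = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.9, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.9],
              [0.0, 0.0, 0.9, 1.0]])
E = iterated_laplacian_embedding(K)
print(np.allclose(E[0], E[1]) and np.allclose(E[2], E[3]))  # True
```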
17

Truong, Hoang Vinh. "Multi color space LBP-based feature selection for texture classification." Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0468/document.

Full text
Abstract:
Texture analysis has been extensively studied and a wide variety of description approaches have been proposed. Among them, the Local Binary Pattern (LBP) plays an essential part in most color image analysis and pattern recognition applications. Usually, devices acquire images and code them in the RGB color space. However, there are many color spaces available for texture classification, each with specific properties that impact performance. In order to avoid the difficulty of choosing a single relevant space, the multi-color-space strategy uses the properties of several spaces simultaneously. However, this strategy increases the number of features extracted from LBPs applied to color images. This work therefore focuses on reducing the dimensionality of the LBP-based feature space through feature selection. In this framework, we consider LBP histogram and bin selection approaches for supervised texture classification. Extensive experiments conducted on several benchmark color texture databases demonstrate that the proposed approaches can improve on state-of-the-art results.
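The basic scalar LBP descriptor behind all of this is compact enough to sketch. The multi-color-space strategy of the thesis computes such histograms per channel in several color spaces and then selects among the resulting bins; the details below (neighbour ordering, the `>=` comparison) are common conventions, not necessarily those of the thesis.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern: threshold each pixel's 3x3
    neighbourhood at the centre value, read the 8 bits as a code in 0..255,
    and return the normalised 256-bin histogram used as a texture feature."""
    h, w = img.shape
    # neighbour offsets in clockwise order starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbour >= centre).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# A perfectly flat image maps every interior pixel to the all-ones code 255.
flat = np.full((8, 8), 7, dtype=np.uint8)
print(lbp_histogram(flat)[255])  # 1.0
```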
18

Sherif, Mohamed Ahmed Mohamed. "Automating Geospatial RDF Dataset Integration and Enrichment." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-215708.

Full text
Abstract:
Over the last years, the Linked Open Data (LOD) cloud has evolved from a mere 12 to more than 10,000 knowledge bases. These knowledge bases come from diverse domains including (but not limited to) publications, life sciences, social networking, government, media and linguistics. Moreover, the LOD cloud also contains a large number of cross-domain knowledge bases such as DBpedia and Yago2. These knowledge bases are commonly managed in a decentralized fashion and contain partly overlapping information. This architectural choice has led to knowledge pertaining to the same domain being published by independent entities in the LOD cloud. For example, information on drugs can be found in Diseasome as well as in DBpedia and Drugbank. Furthermore, certain knowledge bases such as DBLP have been published by several bodies, which in turn has led to duplicated content in the LOD cloud. In addition, large amounts of geo-spatial information have been made available with the growth of the heterogeneous Web of Data. The concurrent publication of knowledge bases containing related information promises to become a phenomenon of increasing importance with the growth of the number of independent data providers. Enabling the joint use of the knowledge bases published by these providers for tasks such as federated queries, cross-ontology question answering and data integration is most commonly tackled by creating links between the resources described within these knowledge bases. Within this thesis, we spur the transition from isolated knowledge bases to enriched Linked Data sets where information can be easily integrated and processed. To achieve this goal, we provide concepts, approaches and use cases that facilitate the integration and enrichment of information with other data types already present on the Linked Data Web, with a focus on geo-spatial data. The first challenge that motivates our work is the lack of measures that use geographic data for linking geo-spatial knowledge bases.
This is partly due to geo-spatial resources being described by means of vector geometry. In particular, discrepancies in granularity and error measurements across knowledge bases render the selection of appropriate distance measures for geo-spatial resources difficult. We address this challenge by evaluating the existing literature for point set measures that can be used to measure the similarity of vector geometries. Then, we present and evaluate the ten measures that we derived from the literature on samples of three real knowledge bases. The second challenge we address in this thesis is the lack of automatic Link Discovery (LD) approaches capable of dealing with geo-spatial knowledge bases with missing and erroneous data. To this end, we present Colibri, an unsupervised approach that allows discovering links between knowledge bases while improving the quality of the instance data in these knowledge bases. A Colibri iteration begins by generating links between knowledge bases. Then, the approach makes use of these links to detect resources with probably erroneous or missing information. This erroneous or missing information is finally corrected or added. The third challenge we address is the lack of scalable LD approaches for tackling big geo-spatial knowledge bases. Thus, we present Deterministic Particle-Swarm Optimization (DPSO), a novel load balancing technique for LD on parallel hardware based on particle-swarm optimization. We combine this approach with the Orchid algorithm for geo-spatial linking and evaluate it on real and artificial data sets. The lack of approaches for the automatic updating of links of an evolving knowledge base is our fourth challenge. This challenge is addressed in this thesis by the Wombat algorithm. Wombat is a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples.
Wombat is based on generalisation via an upward refinement operator to traverse the space of Link Specifications (LS). We study the theoretical characteristics of Wombat and evaluate it on different benchmark data sets. The last challenge addressed herein is the lack of automatic approaches for geo-spatial knowledge base enrichment. Thus, we propose Deer, a supervised learning approach based on a refinement operator for enriching Resource Description Framework (RDF) data sets. We show how we can use exemplary descriptions of enriched resources to generate accurate enrichment pipelines. We evaluate our approach against manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples. Each of the proposed approaches is implemented and evaluated against state-of-the-art approaches on real and/or artificial data sets. Moreover, all approaches are peer-reviewed and published in a conference or a journal paper. Throughout this thesis, we detail the ideas, implementation and the evaluation of each of the approaches. Moreover, we discuss each approach and present lessons learned. Finally, we conclude this thesis by presenting a set of possible future extensions and use cases for each of the proposed approaches.
APA, Harvard, Vancouver, ISO, and other styles
19

Kostiadis, Kostas. "Learning to co-operate in multi-agent systems." Thesis, University of Essex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kirke, Alexis John. "Learning and co-operation in mobile multi-robot systems." Thesis, University of Plymouth, 1997. http://hdl.handle.net/10026.1/1984.

Full text
Abstract:
This thesis addresses the problem of setting the balance between exploration and exploitation in teams of learning robots who exchange information. Specifically, it looks at groups of robots whose tasks include moving between salient points in the environment. To deal with unknown and dynamic environments, such robots need to be able to discover and learn the routes between these points themselves. A natural extension of this scenario is to allow the robots to exchange learned routes so that only one robot needs to learn a route for the whole team to use that route. One contribution of this thesis is to identify a dilemma created by this extension: once one robot has learned a route between two points, all other robots will follow that route without looking for shorter versions. This trade-off will be labeled the Distributed Exploration vs. Exploitation Dilemma, since increasing distributed exploitation (allowing robots to exchange more routes) means decreasing distributed exploration (reducing the robots' ability to learn new versions of routes), and vice versa. At different times, teams may be required with different balances of exploitation and exploration. The main contribution of this thesis is to present a system for setting the balance between exploration and exploitation in a group of robots. This system is demonstrated through experiments involving simulated robot teams. The experiments show that increasing and decreasing the value of a parameter of the novel system leads to a significant increase and decrease respectively in average exploitation (and an equivalent decrease and increase in average exploration) over a series of team missions. A further set of experiments shows that this holds true for a range of team sizes and numbers of goals.
APA, Harvard, Vancouver, ISO, and other styles
21

Hammond, Alec Michael. "Machine Learning Methods for Nanophotonic Design, Simulation, and Operation." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7131.

Full text
Abstract:
Interest in nanophotonics continues to grow as integrated optics provides an affordable platform for areas like telecommunications, quantum information processing, and biosensing. Designing and characterizing integrated photonics components and circuits, however, remains a major bottleneck. This is especially true when complex circuits or devices are required to study a particular phenomenon. To address this challenge, this work develops and experimentally validates a novel machine learning design framework for nanophotonic devices that is both practical and intuitive. As case studies, artificial neural networks are trained to model strip waveguides, integrated chirped Bragg gratings, and microring resonators using a small number of simple input and output parameters relevant to designers. Once trained, the models significantly decrease the computational cost relative to traditional design methodologies. To illustrate the power of the new design paradigm, both forward and inverse design tools enabled by it are demonstrated. These tools are directly used to design and fabricate several integrated Bragg grating devices and ring resonator filters. The method's predictions match the experimental measurements well and do not require any post-fabrication training adjustments.
APA, Harvard, Vancouver, ISO, and other styles
22

Haque, Ashraful. "A Deep Learning-based Dynamic Demand Response Framework." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104927.

Full text
Abstract:
The electric power grid is evolving in terms of generation, transmission and distribution network architecture. On the generation side, distributed energy resources (DER) are participating at a much larger scale. Transmission and distribution networks are transforming to a decentralized architecture from a centralized one. Residential and commercial buildings are now considered as active elements of the electric grid which can participate in grid operation through applications such as the Demand Response (DR). DR is an application through which electric power consumption during the peak demand periods can be curtailed. DR applications ensure an economic and stable operation of the electric grid by eliminating grid stress conditions. In addition to that, DR can be utilized as a mechanism to increase the participation of green electricity in an electric grid. The DR applications, in general, are passive in nature. During the peak demand periods, common practice is to shut down the operation of pre-selected electrical equipment i.e., heating, ventilation and air conditioning (HVAC) and lights to reduce power consumption. This approach, however, is not optimal and does not take into consideration any user preference. Furthermore, this does not provide any information related to demand flexibility beforehand. Under the broad concept of grid modernization, the focus is now on the applications of data analytics in grid operation to ensure an economic, stable and resilient operation of the electric grid. The work presented here utilizes data analytics in DR application that will transform the DR application from a static, look-up-based reactive function to a dynamic, context-aware proactive solution. The dynamic demand response framework presented in this dissertation performs three major functionalities: electrical load forecast, electrical load disaggregation and peak load reduction during DR periods. 
The building-level electrical load forecasting quantifies required peak load reduction during DR periods. The electrical load disaggregation provides equipment-level power consumption. This will quantify the available building-level demand flexibility. The peak load reduction methodology provides optimal HVAC setpoint and brightness during DR periods to reduce the peak demand of a building. The control scheme takes user preference and context into consideration. A detailed methodology with relevant case studies regarding the design process of the network architecture of a deep learning algorithm for electrical load forecasting and load disaggregation is presented. A case study regarding peak load reduction through HVAC setpoint and brightness adjustment is also presented. To ensure the scalability and interoperability of the proposed framework, a layer-based software architecture to replicate the framework within a cloud environment is demonstrated.
Doctor of Philosophy
The modern power grid, known as the smart grid, is transforming how electricity is generated, transmitted and distributed across the US. In a legacy power grid, the utilities are the suppliers and the residential or commercial buildings are the consumers of electricity. However, the smart grid considers these buildings as active grid elements which can contribute to the economic, stable and resilient operation of an electric grid. Demand Response (DR) is a grid application that reduces electrical power consumption during peak demand periods. The objective of DR application is to reduce stress conditions of the electric grid. The current DR practice is to shut down pre-selected electrical equipment i.e., HVAC, lights during peak demand periods. However, this approach is static, pre-fixed and does not consider any consumer preference. The proposed framework in this dissertation transforms the DR application from a look-up-based function to a dynamic context-aware solution. The proposed dynamic demand response framework performs three major functionalities: electrical load forecasting, electrical load disaggregation and peak load reduction. The electrical load forecasting quantifies building-level power consumption that needs to be curtailed during the DR periods. The electrical load disaggregation quantifies demand flexibility through equipment-level power consumption disaggregation. The peak load reduction methodology provides actionable intelligence that can be utilized to reduce the peak demand during DR periods. The work leverages functionalities of a deep learning algorithm to increase forecasting accuracy. An interoperable and scalable software implementation is presented to allow integration of the framework with existing energy management systems.
APA, Harvard, Vancouver, ISO, and other styles
23

Bisen, Pradeep Siddhartha Singh. "Predicting Operator’s Choice During Airline Disruption Using Machine Learning Methods." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18839.

Full text
Abstract:
This master thesis is a collaboration with Jeppesen, a Boeing company, attempting to apply machine learning techniques to predict "When does the operator manually solve the disruption? If he chooses to use the Optimiser, then which option would he choose? And why?". Through the course of this project, various techniques are employed to study, analyze and understand historical labeled airline data consisting of alerts during disruptions, and to classify each data point into one of the categories: manual or optimizer option. This is done using various supervised machine learning classification methods.
APA, Harvard, Vancouver, ISO, and other styles
24

Turunen, Maria. "Learning an operatic role effectively : A case study of learning an operatic role in a time-saving way both vocally and on stage." Thesis, Stockholms konstnärliga högskola, Institutionen för opera, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uniarts:diva-886.

Full text
Abstract:
This study examines the most effective ways to learn an operatic role in a relatively short time. The study consists of my own experiences and observations as I was learning the stage cover role of Donna Elvira for the Finnish National Opera in March 2020. I reflect on my observations of the learning process against a theoretical framework based on literature and on interviews with the director Jussi Nikkilä and the assistant director Anna Kelo.
APA, Harvard, Vancouver, ISO, and other styles
25

Hedman, Erik. "Data for Machine Learning : Data generation and simulation of a logistics operation for machine learning." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-67952.

Full text
Abstract:
In the logistics business, a priority is to deliver packages at the right time in the right place. Mistakes can happen in any task where a human makes a decision. In this project, a simulation of a logistics operation is developed and used to generate data for machine learning algorithms. This project is one part of a bigger project. The algorithm will be trained to discover abnormalities in the flow of packages, with the goal of reducing the amount of wrongfully handled packages. The machine learning algorithms and their training are parts of the bigger project and will not be covered in this paper. This project was brought forth by the IT-consulting company Data Ductus.
APA, Harvard, Vancouver, ISO, and other styles
26

Edman, Anneli. "Combining Knowledge Systems and Hypermedia for User Co-operation and Learning." Doctoral thesis, Uppsala : Dept. of Information Science [Institutionen för informationsvetenskap], Univ, 2001. http://publications.uu.se/theses/91-506-1526-2/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Hernández, Jiménez Enric. "Uncertainty and indistinguishability. Application to modelling with words." Doctoral thesis, Universitat Politècnica de Catalunya, 2007. http://hdl.handle.net/10803/6648.

Full text
Abstract:
The concept of equality is a fundamental notion in any theory since it is essential to the ability of discerning the objects to whom it concerns, ability which in turn is a requirement for any classification mechanism that might be defined.

When all the properties involved are entirely precise, what we obtain is the classical equality, where two individuals are considered equal if and only if they share the same set of properties. What happens, however, when imprecision arises as in the case of properties which are fulfilled only up to a degree? Then, because certain individuals will be more similar than others, the need for a gradual notion of equality arises.

These considerations show that certain contexts that are pervaded with uncertainty require a more flexible concept of equality that goes beyond the rigidity of the classic concept of equality. T-indistinguishability operators seem to be good candidates for this more flexible and general version of the concept of equality that we are searching for.

On the other hand, Dempster-Shafer Theory of Evidence, as a framework for representing and managing general evidences, implicitly conveys the notion of indistinguishability between the elements of the domain of discourse based on their relative compatibility with the evidence at hand. In chapter two we are concerned with providing definitions for the T-indistinguishability operator associated to a given body of evidence.

In chapter three, after providing a comprehensive summary of the state of the art on measures of uncertainty, we tackle the problem of computing entropy when an indistinguishability relation has been defined over the elements of the domain. Entropy should then be measured not according to the occurrence of different events, but according to the variability perceived by an observer equipped with indistinguishability abilities as defined by the indistinguishability relation considered. This idea naturally leads to the introduction of the concept of observational entropy.

Real data is often pervaded with uncertainty so that devising techniques intended to induce knowledge in the presence of uncertainty seems entirely advisable.
The paradigm of computing with words follows this line in order to provide a computation formalism based on linguistic labels in contrast to traditional numerical-based methods.
The use of linguistic labels enriches the understandability of the representation language, although it also requires adapting the classical inductive learning procedures to cope with such labels.

In chapter four, a novel approach to building decision trees is introduced, addressing the case when uncertainty arises as a consequence of considering a more realistic setting in which the decision maker's discernment abilities are taken into account when computing nodes' impurity measures. This novel paradigm results in what have been called 'observational decision trees', since the main idea stems from the notion of observational entropy in order to incorporate indistinguishability concerns.
In addition, we present an algorithm intended to induce linguistic rules from data by properly managing the uncertainty present either in the set of describing labels or in the data itself. A formal comparison with standard algorithms is also provided.
APA, Harvard, Vancouver, ISO, and other styles
28

Meyer, Ann Elizabeth. "Effects of experience and task inconsistency : a study of novice and expert cash register operators." Thesis, Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/29369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kaylani, Assem. "AN ADAPTIVE MULTIOBJECTIVE EVOLUTIONARY APPROACH TO OPTIMIZE ARTMAP NEURAL NETWORKS." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2538.

Full text
Abstract:
This dissertation deals with the evolutionary optimization of ART neural network architectures. ART (adaptive resonance theory) was introduced by Grossberg in 1976. In the last 20 years (1987-2007) a number of ART neural network architectures were introduced into the literature (Fuzzy ARTMAP (1992), Gaussian ARTMAP (1996 and 1997) and Ellipsoidal ARTMAP (2001)). In this dissertation, we focus on the evolutionary optimization of ART neural network architectures with the intent of optimizing the size and the generalization performance of the ART neural network. A number of researchers have focused on the evolutionary optimization of neural networks, but no research had been performed on the evolutionary optimization of ART neural networks prior to 2006, when Daraiseh used evolutionary techniques for the optimization of ART structures. This dissertation extends in many ways and expands in different directions the evolution of ART architectures: (a) it uses a multi-objective optimization of ART structures, thus providing the user with multiple solutions (ART networks) with varying degrees of merit, instead of a single solution; (b) it uses GA parameters that are adaptively determined throughout the ART evolution; (c) it identifies a proper size of the validation set used to calculate the fitness function needed for ART's evolution, thus speeding up the evolutionary process; and (d) it produces experimental results that demonstrate the evolved ART's effectiveness (good accuracy and small size) and efficiency (speed) compared with other competitive ART structures, as well as other classifiers (CART (Classification and Regression Trees) and SVM (Support Vector Machines)).
The overall methodology to evolve ART using a multi-objective approach, the chromosome representation of an ART neural network, the genetic operators used in ART's evolution, and the automatic adaptation of some of the GA parameters in ART's evolution could also be applied in the evolution of other exemplar based neural network classifiers such as the probabilistic neural network and the radial basis function neural network.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
30

Lehmann, Jens. "Learning OWL Class Expressions." Doctoral thesis, Universitätsbibliothek Leipzig, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-38351.

Full text
Abstract:
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems. However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data due to the fact that engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both, the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web. In order to leverage machine-learning approaches for solving these tasks, it is required to develop methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e. 
facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work. The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future. The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold: The first contribution is a complete analysis of desirable properties of refinement operators in description logics. 
Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations has already been expressed in several articles prior to this PhD work. The theoretical limitations, which were shown as a result of these investigations, provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the used description language. The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors and it is shown that it is complete and can be extended to a proper operator. It is the most expressive operator designed for a description language so far. The second operator uses the light-weight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator, if required. It is the first published ideal refinement operator in description logics. While the two operators differ a lot in their technical details, they both use background knowledge efficiently. The third contribution is the actual learning algorithms using the introduced operators. New redundancy elimination and infinity-handling techniques are introduced in these algorithms. 
According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state-of-the-art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach. The research contributions are evaluated on benchmark problems and in use cases. Standard statistical measurements such as cross validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open-source and has been incorporated in other projects.
APA, Harvard, Vancouver, ISO, and other styles
31

Jin, Lei. "The Impact of Co-operation Policies on Participation in Online Learning Object Exchange: A Preliminary Investigation." Thesis, University of Waterloo, 2002. http://hdl.handle.net/10012/869.

Full text
Abstract:
This research investigates the impact of cooperation policies on participation in, and benefits from, online learning object exchanges. First, an in-depth study of issues encountered in other online contexts (peer-to-peer systems, discussion groups with lurkers, reputation systems) provided evidence that explicit cooperation policies and motivation techniques could bring benefits to online object exchanges. A case study is presented based on the comparison between two peer-to-peer systems, Mojo Nation and Gnutella, to show how cooperation policies could add value to online communities. This case study highlights several issues, such as the design of the pricing/exchange mechanism algorithm. Successfully solving these issues will be the key to realizing the benefits of an e-marketplace-based online object exchange. An outline of an experimental exchange mechanism is presented, along with a prototype interface for users. To investigate further issues for users, an online scenario-based questionnaire was set up to measure potential users' attitudes towards cooperation policies. The detailed analysis of the questionnaire results shows that cooperation policies hold promise to make the online object exchange more efficient. The results also illustrate how a transaction-based community could achieve the following benefits: increase of ROI, object value discovery, faster repository expansion, and better motivation through reputation recognition.
APA, Harvard, Vancouver, ISO, and other styles
32

Benadict, Rajasegaram Annet. "The application of post-project reviews in events management by cultural operators." Thesis, Umeå universitet, Företagsekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-118291.

Full text
Abstract:
Organisations across the world and across industries have evidently shifted towards the projectification of their activities and operations, meaning that project management is no longer limited to construction and engineering projects. This projectification has shed light on the rates of project success and failure, which have been noted to differ steeply. Whilst many factors have been identified as triggers of failure or success, one emerging subject gaining attention across management institutions and organisations is the integration of knowledge management principles into the closure stage of a project, from which the term post-project review arises. Post-project reviews receive a lot of attention and strong endorsement from textbooks and other academic literature; however, it was found that their application is not as effective as the literature suggests. The literature also indicated that cultural operators within events management have progressively applied project management tools and techniques. At the same time there is debate concerning the project management rationale, which collides with the prime principles of art, art being the core focus point for cultural operators. In the light of this argument the author began researching the subject of post-project reviews within the events management industry and found that the subject has been scarcely researched overall, in the events management sector and especially in the cultural branch; hence the author identified a research gap. Consequently, this research intends to explore the application of post-project reviews by cultural operators within the events management industry. The study employed a qualitative research design in which semi-structured interviews were conducted across three organisational size segments: micro, small and medium.
Organisational size was determined by the number of employees per organisation; each size segment had two representatives, and all of the respondents ran non-profit organisations. The research revealed that medium organisations employed the most formal manner of PPR, in which PPRs are considered at a strategic level, whilst micro organisations still used a simple record-and-report principle in which none of the recorded numbers were formally analysed. At the same time, the comprehensiveness of a PPR depended very much on the size of the project, which in turn determined the amount of funding and the number of external stakeholders involved.
APA, Harvard, Vancouver, ISO, and other styles
33

Salim, Adil. "Random monotone operators and application to stochastic optimization." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT021/document.

Full text
Abstract:
Cette thèse porte essentiellement sur l'étude d'algorithmes d'optimisation. Les problèmes de programmation intervenant en apprentissage automatique ou en traitement du signal sont dans beaucoup de cas composites, c'est-à-dire qu'ils sont contraints ou régularisés par des termes non lisses. Les méthodes proximales sont une classe d'algorithmes très efficaces pour résoudre de tels problèmes. Cependant, dans les applications modernes de sciences des données, les fonctions à minimiser se représentent souvent comme une espérance mathématique, difficile ou impossible à évaluer. C'est le cas dans les problèmes d'apprentissage en ligne, dans les problèmes mettant en jeu un grand nombre de données ou dans les problèmes de calcul distribué. Pour résoudre ceux-ci, nous étudions dans cette thèse des méthodes proximales stochastiques, qui adaptent les algorithmes proximaux aux cas de fonctions écrites comme une espérance. Les méthodes proximales stochastiques sont d'abord étudiées à pas constant, en utilisant des techniques d'approximation stochastique. Plus précisément, la méthode de l'Equation Differentielle Ordinaire est adaptée au cas d'inclusions differentielles. Afin d'établir le comportement asymptotique des algorithmes, la stabilité des suites d'itérés (vues comme des chaines de Markov) est étudiée. Ensuite, des généralisations de l'algorithme du gradient proximal stochastique à pas décroissant sont mises au point pour resoudre des problèmes composites. Toutes les grandeurs qui permettent de décrire les problèmes à résoudre s'écrivent comme une espérance. Cela inclut un algorithme primal dual pour des problèmes régularisés et linéairement contraints ainsi qu'un algorithme d'optimisation sur les grands graphes
This thesis mainly studies optimization algorithms. Programming problems arising in signal processing and machine learning are composite in many cases, i.e. they exhibit constraints and non-smooth regularization terms. Proximal methods are known to be efficient for solving such problems. However, in modern applications of data science, the functions to be minimized are often represented as statistical expectations whose evaluation is intractable. This covers the cases of online learning, big data problems and distributed computation problems. To solve these problems, we study in this thesis proximal stochastic methods, which generalize proximal algorithms to the case of cost functions written as expectations. Stochastic proximal methods are first studied with a constant step size, using stochastic approximation techniques. More precisely, the Ordinary Differential Equation method is adapted to the case of differential inclusions. In order to study the asymptotic behavior of the algorithms, the stability of the sequences of iterates (seen as Markov chains) is studied. Then, generalizations of the stochastic proximal gradient algorithm with decreasing step sizes are designed to solve composite problems. Every quantity used to define the optimization problem is written as an expectation. This includes a primal-dual algorithm to solve regularized and linearly constrained problems, as well as an algorithm for optimization over large graphs.
APA, Harvard, Vancouver, ISO, and other styles
34

Katzenbach, Michael. "Individual Approaches in Rich Learning Situations Material-based Learning with Pinboards." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-80328.

Full text
Abstract:
Active approaches provide chances for individual, comprehension-oriented learning and can facilitate the acquisition of general mathematical competencies. Using the example of pinboards, which were developed for different areas of the secondary level, workshop participants experience, discuss and further develop learning tasks that can be used for free activities, for material-based concept formation, for coping with heterogeneity, for intelligent exercises, as a tool for the presentation of students' work and as a basis for games. The material also allows some continuous movements and can thus prepare an insightful usage of dynamic geometry programs. The central part of the workshop is a work-sharing group activity with learning tasks for grades 5 to 8. The workshop will close with a discussion of general aspects of material-based learning.
APA, Harvard, Vancouver, ISO, and other styles
35

Todd, Nicole Ann. "Support teachers, learning difficulties and secondary school culture." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/45779/1/Nicole_Todd_Thesis.pdf.

Full text
Abstract:
This thesis will report on mixed method research which examined secondary Support Teachers Learning Difficulties (STLDs) and their modes of operation in New South Wales (NSW) government schools, Australia. Four modes of operation were identified in the literature as consultancy, team teaching, in-class support and withdrawal. An additional area of other duties was also included to examine the time when STLDs were not functioning in the four identified modes of operation. NSW government policy is in keeping with the literature as it recommends that STLDs should spend the majority of their time in consultancy and team teaching while in class with a minimum of withdrawal of students from their main classrooms for individual or small group instruction. STLDs, however, did not appear to be functioning in the recommended way. A number of factors identified in the literature, which may influence the modes of operation, can be grouped under the heading of school culture thus this research involved the examination of the effects of school culture on the modes of operation with the aim of expanding our understanding of the functioning of STLDs and providing suggestions for improvement. The theoretical base of social constructionism has informed this research which included survey and case study methods. Case studies of the STLDs in three secondary schools led to the conclusion that, in conjunction with factors such as flexibility and commitment, the involvement of the STLD in a sub-culture of learning support may lead to functioning in the recommended modes of operation.
APA, Harvard, Vancouver, ISO, and other styles
36

Fattic, Jana R. "Determining the Viability of a Hybrid Experiential and Distance Learning Educational Model for Water Treatment Plant Operators in Kentucky." TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1082.

Full text
Abstract:
Drinking water and wastewater industries are facing a nationwide workforce shortfall of qualified treatment plant operators due to factors including the en masse retirement of baby boomers and the tightening of regulatory requirements regarding the hands-on experience required prior to licensure. Rural areas are hardest hit due to the lack of educational and experiential opportunities available to them within a reasonable proximity. Using a variety of demographic and industry data, a geographic analysis of Kentucky was conducted to assess the viability of the traditional classroom delivery model versus a hybrid experiential and distance learning educational model (HEDLEM). Although this analysis indicates that population density is the dominant indicator for most of the parameters used in this study, the bulk of the workforce needs in the state are distributed throughout rural areas with lower population densities. While the number and geographic distribution of community colleges in the state would appear to support the viability of campus-based workforce development programs, this study demonstrates the limitations of this model in addressing the needs of the water and wastewater workforce, where a significant workplace-associated experiential requirement exists. This limitation is exaggerated in rural areas, which have a demonstrated statewide need. This study indicates that a sufficient recruitment pool exists for the program based on the anticipated
APA, Harvard, Vancouver, ISO, and other styles
37

Tamaddoni, Nezhad Alireza. "Logic-based machine learning using a bounded hypothesis space : the lattice structure, refinement operators and a genetic algorithm approach." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/29849.

Full text
Abstract:
Rich representation inherited from computational logic makes logic-based machine learning a competent method for application domains involving relational background knowledge and structured data. There is, however, a trade-off between the expressive power of the representation and the computational costs. Inductive Logic Programming (ILP) systems employ different kinds of biases and heuristics to cope with the complexity of the search, which is otherwise intractable. Searching a hypothesis space bounded below by a bottom clause is the basis of several state-of-the-art ILP systems (e.g. Progol and Aleph). However, the structure of the search space and the properties of the refinement operators for these systems have not been previously characterised. The contributions of this thesis can be summarised as follows: (i) characterising the properties, structure and morphisms of the bounded subsumption lattice, (ii) analysing bounded refinement operators and stochastic refinement, and (iii) implementing and empirically evaluating stochastic search algorithms, in particular a Genetic Algorithm (GA) approach for bounded subsumption. In this thesis we introduce the concept of bounded subsumption and study its lattice and cover structure. We show the morphisms between the lattice of bounded subsumption, an atomic lattice and the lattice of partitions. We also show that ideal refinement operators exist for bounded subsumption and that, by contrast with general subsumption, efficient least and minimal generalisation operators can be designed for bounded subsumption. In this thesis we also show how refinement operators can be adapted for a stochastic search and give an analysis of refinement operators within the framework of stochastic refinement search. We also discuss genetic search for learning first-order clauses and describe a framework for genetic and stochastic refinement search for bounded subsumption.
Finally, ILP algorithms and implementations which are based on this framework are described and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
38

Jin, Lei. "The impact of co-operation policies on participation in online learning objective exchange a preliminary investigation /." Waterloo, Ont. : University of Waterloo, [Dept. of Management Sciences], 2002. http://etd.uwaterloo.ca/etd/ljin2002.pdf.

Full text
Abstract:
Thesis (M.A.Sc.) - University of Waterloo, 2002.
"A thesis presented to the University of Waterloo in fulfilment of the thesis requirement for the degree of Master of Applied Science in Management Sciences". Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
39

Hassan, Mohamed Elhafiz. "Power Plant Operation Optimization : Unit Commitment of Combined Cycle Power Plants Using Machine Learning and MILP." Thesis, mohamed-ahmed@siemens.com, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-395304.

Full text
Abstract:
In modern electric power systems, the penetration of renewable resources and the introduction of free-market principles have led to new challenges facing power producers and regulators. Renewable production is intermittent, which leads to fluctuations in the grid and requires more control by regulators, and the free-market principle challenges power plant producers to operate their plants in the most profitable way given fluctuating prices. These problems are addressed in the literature as the Economic Dispatch problem, and they have been discussed from both the regulator and producer viewpoints. Combined cycle power plants have the advantage of being dispatchable very quickly and at low cost, which makes them a primary solution to power disturbances in the grid; this fast dispatchability also allows them to exploit price changes very efficiently to maximize their profit, which sheds light on the importance of price forecasting as an input to the profit optimization of power plants. In this project, an integrated solution is introduced to optimize the dispatch of combined cycle power plants that bid in electricity markets. The solution is composed of two models: a forecasting model and an optimization model. The forecasting model is flexible enough to forecast electricity and fuel prices for different markets and with different forecasting horizons. Machine learning algorithms were used to build and validate the model, and data from different countries were used to test it. The optimization model incorporates the forecasting model outputs as input parameters, and uses other parameters and constraints from the operating conditions of the power plant as well as the market in which the plant is selling. The power plant in this model is assumed to satisfy different demands, each of which has a corresponding electricity price and cost of energy not served.
The model decides which units to dispatch at each time stamp to yield the maximum profit given all these constraints; it also decides whether to satisfy each demand fully or to produce only part of it.
APA, Harvard, Vancouver, ISO, and other styles
40

Thaibah, Hilal. "Managing a Hybrid Oral Medication Distribution System in a Pediatric Hospital: A Machine Learning Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1626356839363113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Llofriu, Alonso Martin I. "Multi-Scale Spatial Cognition Models and Bio-Inspired Robot Navigation." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6888.

Full text
Abstract:
The rodent navigation system has been the focus of study for over a century. Recent discoveries have provided insight into the inner workings of this system. Since then, computational approaches have been used to test hypotheses, as well as to improve robot navigation and learning by taking inspiration from the rodent navigation system. This dissertation focuses on the study of the multi-scale representation of the rat's current location found in the rat hippocampus. It first introduces a model that uses these different scales in the Morris maze task to show their advantages. The generalization power of larger scales of representation is shown to allow the learning of more coherent and complete policies faster. Based on this model, a robot navigation learning system is presented and compared to an existing algorithm on the taxi driver problem. The algorithm outperforms a canonical Q-Learning algorithm, learning the task faster. It is also shown to work in a continuous environment, making it suitable for a real robotics application. A novel task is also introduced and modeled, with the aim of providing further insight into an ongoing discussion over the involvement of the temporal portion of the hippocampus in navigation. The model is able to reproduce the results obtained with real rats and generates a set of empirically verifiable predictions. Finally, a novel multi-query path planning system is introduced, inspired by the way rodents represent location, store a topological model of the environment and use it to plan future routes. The algorithm is able to improve the routes on the second run, without disrupting the robustness of the underlying navigation system.
APA, Harvard, Vancouver, ISO, and other styles
42

Hartley, Sally Ann. "Learning for development through co-operation : the engagement of youth with co-operatives in Lesotho and Uganda." Thesis, Open University, 2012. http://oro.open.ac.uk/54667/.

Full text
Abstract:
With 2011 designated the United Nations Year of the Youth and 2012 the Year of the Co-operative, this research contributes to issues raised by these two significant and timely events. The renaissance of co-operatives globally and their revival in countries in Africa has promoted interest and debate around co-operatives as collective values-based businesses and their potential to promote economic and social development and address poverty (UN, 2010). There is also increasing recognition that youth in Africa present both a potential and a challenge for development (UN, 2011). Youth access to education, civic participation and the ability to secure and sustain livelihoods are core concerns. Initiatives to involve youth in co-operatives in Lesotho and Uganda bring these two areas together and are of particular interest. The focus of this thesis is whether and how co-operatives provide opportunities for youth learning and the development of their capabilities and agency to achieve valued goals. The analysis is framed through a conceptualisation of co-operatives as learning spaces within which theories of situated learning are combined with the capability approach. Using qualitative and participatory methods to investigate youth engagement in co-operatives in Lesotho and Uganda, the thesis argues that co-operatives provide situated social learning spaces where youth learn for development. Learning emerges within such spaces for: business and vocational knowledge and skills, personal development, collective learning based on trust and co-operator identity, and wider outcomes such as community engagement, enhanced relationships and networks and development of the co-operative form. Learning is, however, both enabled and restricted by: gender, the level of prior formal education, the networks of which a co-operative is a part and the type and success of a co-operative.
APA, Harvard, Vancouver, ISO, and other styles
43

Ticlavilca, Andres M. "Multivariate Bayesian Machine Learning Regression for Operation and Management of Multiple Reservoir, Irrigation Canal, and River Systems." DigitalCommons@USU, 2010. https://digitalcommons.usu.edu/etd/600.

Full text
Abstract:
The principal objective of this dissertation is to develop Bayesian machine learning models for multiple reservoir, irrigation canal, and river system operation and management. These types of models are derived from the emerging area of machine learning theory; they are characterized by their ability to capture the underlying physics of the system simply by examination of the measured system inputs and outputs. They can be used to provide probabilistic predictions of system behavior using only historical data. The models were developed in the form of a multivariate relevance vector machine (MVRVM) that is based on a sparse Bayesian learning machine approach for regression. Using this Bayesian approach, a predictive confidence interval is obtained from the model that captures the uncertainty of both the model and the data. The models were applied to the multiple reservoir, canal and river system located in the regulated Lower Sevier River Basin in Utah. The models were developed to perform predictions of multi-time-ahead releases of multiple reservoirs, diversions of multiple canals, and streamflow and water loss/gain in a river system. This research represents the first attempt to use a multivariate Bayesian learning regression approach to develop simultaneous multi-step-ahead predictions with predictive confidence intervals for multiple outputs in a regulated river basin system. These predictions will be of potential value to reservoir and canal operators in identifying the best decisions for operation and management of irrigation water supply systems.
APA, Harvard, Vancouver, ISO, and other styles
44

Tucker, Mark Alan. "The influence of social-learning factors on farm operators' perceptions of agricultural-chemical risk in the Ohio Darby Creek hydrologic unit." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1239623499.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Tucker, Mark A. "The influence of social-learning factors on farm operators' perceptions of agricultural-chemical risk in the Ohio Darby Creek hydrologic unit /." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487929230739381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hendrich, Christopher. "Proximal Splitting Methods in Nonsmooth Convex Optimization." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-149548.

Full text
Abstract:
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we are also making use of monotone operator theory as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notations and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem with some given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach affects the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they are providing a complete splitting philosophy in that the resolvents, which arise in the iterative process, are only taken separately from each maximally monotone operator occurring in the problem description. We firstly analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. 
Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy by reformulating the corresponding inclusion problem reasonably such that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with different numerical experiments where we intend to compare our methods against algorithms from the literature. The problems which arise in this part are manifold and they reflect the importance of this field of research as convex optimization problems appear in lots of applications of interest.
APA, Harvard, Vancouver, ISO, and other styles
47

Hogsholm, Robin Wagner. "Impact of a goal setting procedure on the work performance of young adults with behavioral/emotional/learning challenges." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Xiang, Yusheng, and Marcus Geimer. "Optimization of operation strategy for primary torque based hydrostatics drivetrain using artificial intelligence." Technische Universität Dresden, 2020. https://tud.qucosa.de/id/qucosa%3A71073.

Full text
Abstract:
A new primary torque control concept for hydrostatic mobile machines was introduced in 2018 [1]. This concept controls the pressure in a closed circuit by changing the angle of the hydraulic pump to achieve the desired pressure based on a feedback system. Thanks to this concept, a series of advantages is expected [2]. However, while working in a Y cycle, the primary torque controlled wheel loader is less efficient than a secondary controlled earthmover due to its lack of recuperation ability. As an alternative, we use deep learning algorithms to improve the machines' regeneration performance. In this paper, we first present a potential analysis showing the benefit of utilizing the regeneration process, and then propose a series of CRDNNs, which combine CNN, RNN, and DNN, to precisely detect Y cycles. Compared to existing algorithms, the CRDNN with bidirectional LSTMs has the best accuracy, and the CRDNN with LSTMs has comparable performance but far fewer training parameters. Based on our dataset of 119 truck loading cycles, our best neural network shows a 98.2 % test accuracy. Therefore, even with a simple regeneration process, our algorithm can improve the holistic efficiency of mobile machines by up to 9 % during Y cycle processes if the primary torque concept is used.
APA, Harvard, Vancouver, ISO, and other styles
49

Belkin, Mikhail. "Problems of learning on manifolds /." 2003. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3097083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

"Fast Graph Laplacian regularized kernel learning via semidefinite-quadratic-linear programming." 2011. http://library.cuhk.edu.hk/record=b5894621.

Full text
Abstract:
Wu, Xiaoming.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (p. 30-34).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.iv
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Preliminaries --- p.4
Chapter 2.1 --- Kernel Learning Theory --- p.4
Chapter 2.1.1 --- Positive Semidefinite Kernel --- p.4
Chapter 2.1.2 --- The Reproducing Kernel Map --- p.6
Chapter 2.1.3 --- Kernel Tricks --- p.7
Chapter 2.2 --- Spectral Graph Theory --- p.8
Chapter 2.2.1 --- Graph Laplacian --- p.8
Chapter 2.2.2 --- Eigenvectors of Graph Laplacian --- p.9
Chapter 2.3 --- Convex Optimization --- p.10
Chapter 2.3.1 --- From Linear to Conic Programming --- p.11
Chapter 2.3.2 --- Second-Order Cone Programming --- p.12
Chapter 2.3.3 --- Semidefinite Programming --- p.12
Chapter 3 --- Fast Graph Laplacian Regularized Kernel Learning --- p.14
Chapter 3.1 --- The Problems --- p.14
Chapter 3.1.1 --- MVU --- p.16
Chapter 3.1.2 --- PCP --- p.17
Chapter 3.1.3 --- Low-Rank Approximation: from SDP to QSDP --- p.18
Chapter 3.2 --- Previous Approach: from QSDP to SDP --- p.20
Chapter 3.3 --- Our Formulation: from QSDP to SQLP --- p.21
Chapter 3.4 --- Experimental Results --- p.23
Chapter 3.4.1 --- The Results --- p.25
Chapter 4 --- Conclusion --- p.28
Bibliography --- p.30
APA, Harvard, Vancouver, ISO, and other styles